Jan 28 17:15:52 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 28 17:15:52 crc restorecon[4702]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 28 17:15:52
crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 28 17:15:52 crc restorecon[4702]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 
17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:52 crc restorecon[4702]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc 
restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 17:15:52 crc restorecon[4702]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 28 17:15:52 crc restorecon[4702]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:52 
crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 17:15:52 crc restorecon[4702]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 
crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 17:15:52 crc restorecon[4702]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 17:15:52 crc restorecon[4702]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 17:15:52 crc restorecon[4702]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 17:15:52 crc restorecon[4702]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 17:15:52 crc restorecon[4702]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:52 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 
17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 17:15:53 crc 
restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 
17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 
17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc 
restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 17:15:53 crc restorecon[4702]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 17:15:53 crc restorecon[4702]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 28 17:15:54 crc kubenswrapper[5001]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 17:15:54 crc kubenswrapper[5001]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 28 17:15:54 crc kubenswrapper[5001]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 17:15:54 crc kubenswrapper[5001]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
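The deprecation notices above all point at the same remedy: move those flags into the file passed to the kubelet's --config flag. Purely as an illustration (the paths, socket, taint, and reservation values below are assumptions, not taken from this log or from the node's rendered config), the corresponding KubeletConfiguration stanza could look roughly like:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# --container-runtime-endpoint -> containerRuntimeEndpoint
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
# --volume-plugin-dir -> volumePluginDir
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
# --register-with-taints -> registerWithTaints
registerWithTaints:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
# --system-reserved -> systemReserved
systemReserved:
  cpu: 500m
  memory: 1Gi
# per the message above, replace --minimum-container-ttl-duration with eviction settings
evictionHard:
  memory.available: 100Mi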
Jan 28 17:15:54 crc kubenswrapper[5001]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 28 17:15:54 crc kubenswrapper[5001]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.334632 5001 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343708 5001 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343747 5001 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343753 5001 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343758 5001 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343764 5001 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343768 5001 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343773 5001 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343778 5001 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343782 5001 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343787 5001 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343791 5001 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343796 5001 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343800 5001 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343804 5001 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343808 5001 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343812 5001 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343818 5001 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343824 5001 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343831 5001 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343838 5001 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343844 5001 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343849 5001 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343854 5001 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343859 5001 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343865 5001 feature_gate.go:330] unrecognized feature gate: Example Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343871 5001 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343876 5001 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343881 5001 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343885 5001 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343889 5001 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343896 5001 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343902 5001 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343906 5001 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343910 5001 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343915 5001 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343928 5001 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343932 5001 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343938 5001 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343944 5001 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343949 5001 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343954 5001 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343958 5001 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343963 5001 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343967 5001 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343993 5001 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.343998 5001 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344002 5001 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344007 5001 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344011 5001 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344015 5001 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344020 5001 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344027 5001 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344032 5001 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344036 5001 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344040 5001 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344044 5001 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344048 5001 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344053 5001 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344057 5001 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344061 5001 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344065 5001 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344070 5001 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344074 5001 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344078 5001 feature_gate.go:330] unrecognized feature gate: 
VSphereMultiVCenters Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344083 5001 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344088 5001 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344093 5001 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344098 5001 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344103 5001 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344107 5001 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.344112 5001 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345136 5001 flags.go:64] FLAG: --address="0.0.0.0" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345151 5001 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345160 5001 flags.go:64] FLAG: --anonymous-auth="true" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345166 5001 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345172 5001 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345177 5001 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345183 5001 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345189 5001 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345193 5001 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345198 5001 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345202 5001 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345208 5001 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345214 5001 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345218 5001 flags.go:64] FLAG: --cgroup-root="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345222 5001 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345226 5001 flags.go:64] FLAG: --client-ca-file="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345230 5001 flags.go:64] FLAG: --cloud-config="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345234 5001 flags.go:64] FLAG: --cloud-provider="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345238 5001 flags.go:64] FLAG: --cluster-dns="[]" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345243 5001 flags.go:64] FLAG: --cluster-domain="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345247 5001 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 28 17:15:54 crc 
kubenswrapper[5001]: I0128 17:15:54.345252 5001 flags.go:64] FLAG: --config-dir="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345256 5001 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345261 5001 flags.go:64] FLAG: --container-log-max-files="5" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345267 5001 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345271 5001 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345276 5001 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345280 5001 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345284 5001 flags.go:64] FLAG: --contention-profiling="false" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345288 5001 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345293 5001 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345297 5001 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345301 5001 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345307 5001 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345311 5001 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345315 5001 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345319 5001 flags.go:64] FLAG: --enable-load-reader="false" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345323 5001 flags.go:64] FLAG: --enable-server="true" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345327 5001 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345333 5001 flags.go:64] FLAG: --event-burst="100" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345337 5001 flags.go:64] FLAG: --event-qps="50" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345342 5001 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345346 5001 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345351 5001 flags.go:64] FLAG: --eviction-hard="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345358 5001 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345362 5001 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345366 5001 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345370 5001 flags.go:64] FLAG: --eviction-soft="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345375 5001 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345379 5001 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345383 5001 flags.go:64] FLAG: 
--experimental-allocatable-ignore-eviction="false" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345386 5001 flags.go:64] FLAG: --experimental-mounter-path="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345390 5001 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345394 5001 flags.go:64] FLAG: --fail-swap-on="true" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345398 5001 flags.go:64] FLAG: --feature-gates="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345403 5001 flags.go:64] FLAG: --file-check-frequency="20s" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345407 5001 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345411 5001 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345416 5001 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345420 5001 flags.go:64] FLAG: --healthz-port="10248" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345424 5001 flags.go:64] FLAG: --help="false" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345428 5001 flags.go:64] FLAG: --hostname-override="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345432 5001 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345436 5001 flags.go:64] FLAG: --http-check-frequency="20s" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345440 5001 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345444 5001 flags.go:64] FLAG: --image-credential-provider-config="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345448 5001 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345453 5001 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345458 5001 flags.go:64] FLAG: --image-service-endpoint="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345462 5001 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345466 5001 flags.go:64] FLAG: --kube-api-burst="100" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345471 5001 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345475 5001 flags.go:64] FLAG: --kube-api-qps="50" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345503 5001 flags.go:64] FLAG: --kube-reserved="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345509 5001 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345514 5001 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345520 5001 flags.go:64] FLAG: --kubelet-cgroups="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345526 5001 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345530 5001 flags.go:64] FLAG: --lock-file="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345535 5001 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345540 5001 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 28 
17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345546 5001 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345554 5001 flags.go:64] FLAG: --log-json-split-stream="false" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345560 5001 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345566 5001 flags.go:64] FLAG: --log-text-split-stream="false" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345571 5001 flags.go:64] FLAG: --logging-format="text" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345577 5001 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345583 5001 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345589 5001 flags.go:64] FLAG: --manifest-url="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345595 5001 flags.go:64] FLAG: --manifest-url-header="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345603 5001 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345608 5001 flags.go:64] FLAG: --max-open-files="1000000" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345614 5001 flags.go:64] FLAG: --max-pods="110" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345619 5001 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345624 5001 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345629 5001 flags.go:64] FLAG: --memory-manager-policy="None" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345633 5001 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345637 5001 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345641 5001 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345645 5001 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345656 5001 flags.go:64] FLAG: --node-status-max-images="50" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345660 5001 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345664 5001 flags.go:64] FLAG: --oom-score-adj="-999" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345668 5001 flags.go:64] FLAG: --pod-cidr="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345672 5001 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345679 5001 flags.go:64] FLAG: --pod-manifest-path="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345683 5001 flags.go:64] FLAG: --pod-max-pids="-1" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345687 5001 flags.go:64] FLAG: --pods-per-core="0" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345692 5001 flags.go:64] FLAG: --port="10250" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345697 5001 flags.go:64] FLAG: 
--protect-kernel-defaults="false" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345702 5001 flags.go:64] FLAG: --provider-id="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345706 5001 flags.go:64] FLAG: --qos-reserved="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345710 5001 flags.go:64] FLAG: --read-only-port="10255" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345714 5001 flags.go:64] FLAG: --register-node="true" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345718 5001 flags.go:64] FLAG: --register-schedulable="true" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345722 5001 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345730 5001 flags.go:64] FLAG: --registry-burst="10" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345734 5001 flags.go:64] FLAG: --registry-qps="5" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345738 5001 flags.go:64] FLAG: --reserved-cpus="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345742 5001 flags.go:64] FLAG: --reserved-memory="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345748 5001 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345752 5001 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345757 5001 flags.go:64] FLAG: --rotate-certificates="false" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345761 5001 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345765 5001 flags.go:64] FLAG: --runonce="false" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345769 5001 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345773 5001 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345777 5001 flags.go:64] FLAG: --seccomp-default="false" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345781 5001 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345785 5001 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345790 5001 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345794 5001 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345798 5001 flags.go:64] FLAG: --storage-driver-password="root" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345802 5001 flags.go:64] FLAG: --storage-driver-secure="false" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345806 5001 flags.go:64] FLAG: --storage-driver-table="stats" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345810 5001 flags.go:64] FLAG: --storage-driver-user="root" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345815 5001 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345819 5001 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345823 5001 flags.go:64] FLAG: --system-cgroups="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345827 5001 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345834 5001 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345839 5001 flags.go:64] FLAG: --tls-cert-file="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.345843 5001 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.348590 5001 flags.go:64] FLAG: --tls-min-version="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.348596 5001 flags.go:64] FLAG: --tls-private-key-file="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.348601 5001 flags.go:64] FLAG: --topology-manager-policy="none" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.348607 5001 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.348612 5001 flags.go:64] FLAG: --topology-manager-scope="container" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.348617 5001 flags.go:64] FLAG: --v="2" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.348626 5001 flags.go:64] FLAG: --version="false" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.348635 5001 flags.go:64] FLAG: --vmodule="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.348642 5001 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.348648 5001 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348762 5001 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348767 5001 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348772 5001 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348776 5001 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348780 5001 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348784 5001 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348788 5001 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348791 5001 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348795 5001 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348798 5001 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348802 5001 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348806 5001 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348810 5001 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348814 5001 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348818 5001 feature_gate.go:330] unrecognized feature 
gate: AlibabaPlatform Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348822 5001 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348826 5001 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348837 5001 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348841 5001 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348846 5001 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348851 5001 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348855 5001 feature_gate.go:330] unrecognized feature gate: Example Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348860 5001 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348865 5001 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348869 5001 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348874 5001 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348878 5001 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348883 5001 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348887 5001 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348891 5001 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348897 5001 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348903 5001 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348907 5001 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348911 5001 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348916 5001 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348921 5001 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348925 5001 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348930 5001 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348935 5001 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348940 5001 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348944 5001 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348948 5001 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348951 5001 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348955 5001 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348959 5001 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348964 5001 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348968 5001 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348989 5001 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.348994 5001 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349003 5001 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349010 5001 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349015 5001 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349020 5001 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349025 5001 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349029 5001 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349034 5001 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349038 5001 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349043 5001 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349049 5001 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349054 5001 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349059 5001 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349064 5001 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349069 5001 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349074 5001 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349079 5001 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349084 5001 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349090 5001 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349095 5001 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349099 5001 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349103 5001 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.349107 5001 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.349114 5001 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.361009 5001 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.361061 5001 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361155 5001 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361164 5001 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361168 5001 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361173 5001 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361179 5001 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361184 5001 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361205 5001 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361211 5001 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361215 5001 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361220 5001 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361227 5001 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361233 5001 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361237 5001 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361242 5001 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361246 5001 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361250 5001 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361255 5001 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361259 5001 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361263 5001 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361284 5001 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361290 5001 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361295 5001 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361300 5001 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361304 5001 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361308 5001 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361312 5001 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361317 5001 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361322 5001 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361326 5001 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361332 5001 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361337 5001 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361343 5001 feature_gate.go:330] unrecognized feature gate: Example Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361364 5001 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361369 5001 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361373 5001 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361378 5001 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361383 5001 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361388 5001 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361392 5001 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361397 5001 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361401 5001 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361405 5001 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361409 5001 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361413 5001 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361419 5001 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361443 5001 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361447 5001 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361452 5001 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361456 5001 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361461 5001 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361465 5001 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361469 5001 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361473 5001 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361477 5001 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361482 5001 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361486 5001 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361491 5001 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361495 5001 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361552 5001 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361556 5001 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361560 5001 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361564 5001 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361568 5001 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361573 5001 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361578 5001 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361583 5001 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361587 5001 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361591 5001 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361595 5001 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361599 5001 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361603 5001 feature_gate.go:330] unrecognized 
feature gate: MultiArchInstallAWS Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.361611 5001 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361798 5001 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361807 5001 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361812 5001 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361817 5001 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361822 5001 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361827 5001 feature_gate.go:330] unrecognized feature gate: Example Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361831 5001 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361835 5001 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361839 5001 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361843 5001 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361847 5001 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361868 5001 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361872 5001 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361876 5001 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361881 5001 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361885 5001 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361889 5001 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361893 5001 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361897 5001 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361902 5001 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361907 5001 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361911 5001 
feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361916 5001 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361920 5001 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361925 5001 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361943 5001 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361948 5001 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361952 5001 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361956 5001 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361964 5001 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361969 5001 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361992 5001 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.361996 5001 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362001 5001 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362005 5001 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362010 5001 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362015 5001 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362020 5001 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362024 5001 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362028 5001 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362033 5001 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362038 5001 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362043 5001 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362048 5001 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362072 5001 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362077 5001 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362081 5001 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 17:15:54 crc kubenswrapper[5001]: 
W0128 17:15:54.362087 5001 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362091 5001 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362096 5001 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362101 5001 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362105 5001 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362110 5001 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362115 5001 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362120 5001 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362125 5001 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362130 5001 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362152 5001 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362157 5001 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362161 5001 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362166 5001 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362171 5001 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362177 5001 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362182 5001 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362187 5001 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362192 5001 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362196 5001 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362201 5001 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362206 5001 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362227 5001 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.362231 5001 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.362238 5001 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.362481 5001 server.go:940] "Client rotation is on, will bootstrap in background" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.368140 5001 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.368253 5001 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.372649 5001 server.go:997] "Starting client certificate rotation" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.372679 5001 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.372822 5001 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-13 22:56:22.746295275 +0000 UTC Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.372904 5001 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.396251 5001 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 17:15:54 crc kubenswrapper[5001]: E0128 17:15:54.397474 5001 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.399117 5001 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.420418 5001 log.go:25] "Validated CRI v1 runtime API" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.468087 5001 log.go:25] "Validated CRI v1 image API" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.470394 5001 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.477684 5001 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-28-17-11-08-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.477798 5001 fs.go:134] 
Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.506909 5001 manager.go:217] Machine: {Timestamp:2026-01-28 17:15:54.500305249 +0000 UTC m=+0.668093549 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:b592013b-7faa-4e90-8c4e-8a75265fa756 BootID:dc15c8fd-f2c2-4ad5-902f-cce872e1953a Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:5e:a2:45 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:5e:a2:45 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:2f:fe:3c Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:2e:3c:30 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:9c:a0:26 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:32:bf:e7 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:06:37:a4:09:9d:3d Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:3a:8c:12:cf:c1:4d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified 
Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.507330 5001 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
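The Machine entry above reports capacities in raw bytes (MemoryCapacity:33654124544, a 214748364800-byte vda disk) spread over 12 single-core sockets. A small, purely illustrative Go conversion of those figures to GiB:

```go
package main

import "fmt"

func main() {
	// Figures copied from the Machine info entry above; 1 GiB = 2^30 bytes.
	const memoryCapacityBytes = 33654124544
	const diskSizeBytes = 214748364800 // vda from DiskMap
	fmt.Printf("memory: %.2f GiB\n", float64(memoryCapacityBytes)/(1<<30)) // ~31.34 GiB
	fmt.Printf("disk vda: %.0f GiB\n", float64(diskSizeBytes)/(1<<30))     // 200 GiB
}
```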
Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.507592 5001 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.509116 5001 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.509311 5001 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.509342 5001 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.509529 5001 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.509539 5001 container_manager_linux.go:303] "Creating device plugin manager" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.511046 5001 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.511078 5001 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.511852 5001 state_mem.go:36] "Initialized new in-memory state store" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.512184 5001 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.517771 5001 kubelet.go:418] "Attempting to sync node with API server" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.517796 5001 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
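The NodeConfig entry above carries the reservations that feed the standard node-allocatable calculation: allocatable = capacity - kube-reserved - system-reserved - hard eviction threshold. KubeReserved is null here, SystemReserved memory is 350Mi, and the memory.available hard eviction threshold is 100Mi. A rough Go sketch of that arithmetic against the machine's 33654124544-byte memory capacity:

```go
package main

import "fmt"

// Standard node-allocatable formula applied to the values logged above.
// KubeReserved is null in the NodeConfig, so it contributes nothing here.
func main() {
	const mi = 1 << 20
	capacity := int64(33654124544)  // MemoryCapacity from the Machine info entry
	systemReserved := int64(350 * mi) // SystemReserved memory
	evictionHard := int64(100 * mi)   // memory.available hard eviction threshold
	allocatable := capacity - systemReserved - evictionHard
	fmt.Printf("allocatable memory: %d bytes (%.2f GiB)\n",
		allocatable, float64(allocatable)/(1<<30)) // ~30.90 GiB
}
```

The same subtraction applies per resource; the cpu and ephemeral-storage reservations from the same entry are handled the same way.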
Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.517817 5001 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.517829 5001 kubelet.go:324] "Adding apiserver pod source" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.517840 5001 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.524068 5001 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:15:54 crc kubenswrapper[5001]: E0128 17:15:54.524181 5001 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.524067 5001 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:15:54 crc kubenswrapper[5001]: E0128 17:15:54.524232 5001 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.527631 5001 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.528361 5001 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
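Both reflector failures above are plain TCP connection refusals against https://api-int.crc.testing:6443, meaning the kube-apiserver is simply not listening yet at this point in the boot. A minimal, illustrative probe of that endpoint (hostname and port taken from the log; this checks only the TCP handshake, not TLS or the API server itself):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Quick connectivity check against the endpoint the reflectors above fail to reach.
func main() {
	conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 3*time.Second)
	if err != nil {
		// While the API server is down this reports the same
		// "connect: connection refused" seen in the log entries above.
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("port 6443 is accepting connections")
}
```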
Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.529461 5001 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.531079 5001 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.531114 5001 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.531127 5001 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.531140 5001 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.531159 5001 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.531172 5001 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.531184 5001 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.531203 5001 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.531219 5001 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.531232 5001 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.531249 5001 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.531262 5001 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.532692 5001 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.533366 5001 server.go:1280] "Started kubelet" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.534825 5001 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:15:54 crc systemd[1]: Started Kubernetes Kubelet. 
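The certificate entries in this boot show rotation deadlines well before the certificates' expiry dates (for example, the kube-apiserver-client-kubelet pair expiring 2026-02-24 with a deadline in December 2025), because the certificate manager schedules rotation at a jittered fraction of the certificate's lifetime rather than at expiry. A sketch of that scheduling idea; the 70 to 90 percent band and the one-year issuance date are assumptions for illustration, not the upstream constants:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a deadline at a random point inside a band of the
// certificate's validity period. The 0.70-0.90 band is an assumption made for
// this sketch; the real certificate manager applies its own jitter.
func rotationDeadline(notBefore, notAfter time.Time, r *rand.Rand) time.Time {
	lifetime := notAfter.Sub(notBefore)
	fraction := 0.70 + 0.20*r.Float64()
	return notBefore.Add(time.Duration(fraction * float64(lifetime)))
}

func main() {
	r := rand.New(rand.NewSource(1))
	notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC) // assumed issuance, one year before the logged expiry
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)  // expiry from the log
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter, r))
}
```

A deadline that has already passed, as is the case here, is why the log immediately shows "Rotating certificates" followed by the failed CSR post while the API server is still unreachable.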
Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.536571 5001 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.536568 5001 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.542098 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.542157 5001 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.543254 5001 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.543284 5001 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 28 17:15:54 crc kubenswrapper[5001]: E0128 17:15:54.543378 5001 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.543478 5001 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.541820 5001 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.543524 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 18:58:56.976962242 +0000 UTC Jan 28 17:15:54 crc kubenswrapper[5001]: E0128 17:15:54.554055 5001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="200ms" Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.554047 5001 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:15:54 crc kubenswrapper[5001]: E0128 17:15:54.554194 5001 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.554709 5001 server.go:460] "Adding debug handlers to kubelet server" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.555520 5001 factory.go:153] Registering CRI-O factory Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.555692 5001 factory.go:221] Registration of the crio container factory successfully Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.555887 5001 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.556016 5001 factory.go:55] Registering systemd factory Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.556134 5001 factory.go:221] Registration of the systemd 
container factory successfully Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.556287 5001 factory.go:103] Registering Raw factory Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.556408 5001 manager.go:1196] Started watching for new ooms in manager Jan 28 17:15:54 crc kubenswrapper[5001]: E0128 17:15:54.554959 5001 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.30:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188ef484bc30214d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 17:15:54.533314893 +0000 UTC m=+0.701103133,LastTimestamp:2026-01-28 17:15:54.533314893 +0000 UTC m=+0.701103133,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.557879 5001 manager.go:319] Starting recovery of all containers Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562301 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562355 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562365 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562374 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562383 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562392 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562400 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562408 5001 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562419 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562428 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562436 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562446 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562454 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562464 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562472 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562482 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562492 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562500 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562508 5001 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562518 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562528 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562536 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562546 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562556 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562588 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562598 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562610 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562619 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562628 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562637 5001 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562647 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562656 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562666 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562679 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562688 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562696 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562705 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562713 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562738 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562747 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562756 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562765 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562774 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562782 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562791 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562800 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562809 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562818 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562832 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562841 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562849 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562858 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562871 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562880 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562891 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562904 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562917 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562931 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562941 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562950 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562958 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562983 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.562994 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563003 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563012 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563021 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563030 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563039 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563047 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563078 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563086 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563094 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563103 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563112 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563120 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563149 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563158 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563167 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563176 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563183 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563193 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563202 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563211 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563221 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563240 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563249 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563257 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563265 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563303 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563318 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563331 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563344 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563357 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563369 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563380 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563391 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563419 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563427 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563436 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563444 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563452 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563460 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563469 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563477 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563550 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563561 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563570 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563579 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563593 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563608 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563618 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563626 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563645 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563655 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563664 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563673 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563681 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563689 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563698 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563706 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563723 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563731 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563739 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563748 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563756 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563765 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563773 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563781 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563801 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563833 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563842 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563851 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563859 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563867 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563875 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563883 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563901 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563910 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563919 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563929 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563941 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563951 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563961 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.563968 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564015 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564023 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564031 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564073 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564095 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564105 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564114 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564123 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564144 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564152 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564160 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564169 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564177 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564208 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564216 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564225 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564249 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564258 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564267 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564275 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564283 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564291 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564300 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564308 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564325 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564342 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564350 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564358 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564367 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564392 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564401 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564413 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564456 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564472 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564484 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564510 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564546 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564570 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564580 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564588 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564604 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564620 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564628 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564636 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564644 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564662 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564670 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564678 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564696 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564704 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564712 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564720 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564727 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564736 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564745 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564753 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564775 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564792 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564799 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564807 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.564815 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.567165 5001 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.567192 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.567203 5001 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.567251 5001 reconstruct.go:97] "Volume reconstruction finished" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.567262 5001 reconciler.go:26] "Reconciler: start to sync state" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.580684 5001 manager.go:324] Recovery completed Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.590753 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.590853 5001 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.592533 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.592562 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.592575 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.592753 5001 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.592788 5001 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.592812 5001 kubelet.go:2335] "Starting kubelet main sync loop" Jan 28 17:15:54 crc kubenswrapper[5001]: E0128 17:15:54.592854 5001 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.593454 5001 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.593477 5001 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.593496 5001 state_mem.go:36] "Initialized new in-memory state store" Jan 28 17:15:54 crc kubenswrapper[5001]: W0128 17:15:54.596337 5001 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:15:54 crc kubenswrapper[5001]: E0128 17:15:54.596427 5001 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.620172 5001 policy_none.go:49] "None policy: Start" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.621610 5001 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.621658 5001 state_mem.go:35] "Initializing new in-memory state store" Jan 28 17:15:54 crc kubenswrapper[5001]: E0128 17:15:54.643744 5001 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.672221 5001 manager.go:334] "Starting Device Plugin manager" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.672545 5001 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.672609 5001 server.go:79] "Starting device plugin registration server" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.673344 5001 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.673507 5001 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.673716 5001 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.673886 5001 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.673904 5001 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 17:15:54 crc kubenswrapper[5001]: E0128 17:15:54.686755 5001 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 17:15:54 crc kubenswrapper[5001]: 
I0128 17:15:54.692935 5001 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.693082 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.694494 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.694547 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.694566 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.694749 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.696058 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.696077 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.696132 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.696133 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.696150 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.696289 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.696584 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.696696 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.697362 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.697381 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.697388 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.697390 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.697410 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.697422 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.697460 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.697840 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.697867 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.698352 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.698388 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.698402 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.698563 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.698585 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.698595 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.698765 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.699063 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.699124 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.699136 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:54 crc 
kubenswrapper[5001]: I0128 17:15:54.699195 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.699256 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.699809 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.699839 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.699854 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.700063 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.700102 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.700144 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.700182 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.700192 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.700721 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.700747 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.700756 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:54 crc kubenswrapper[5001]: E0128 17:15:54.755156 5001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="400ms" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.769370 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.769425 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.769455 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.769478 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.769498 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.769629 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.769664 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.769688 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.769744 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.769809 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.769840 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.769863 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.769883 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.769913 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.770002 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.774510 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.775429 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.775474 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.775486 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.775512 5001 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 17:15:54 crc kubenswrapper[5001]: E0128 17:15:54.775995 5001 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.30:6443: connect: connection refused" node="crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871540 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871587 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871612 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871638 5001 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871663 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871684 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871688 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871712 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871731 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871739 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871753 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871745 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871778 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871787 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871786 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871806 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871822 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871830 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871821 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871841 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871818 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871867 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871890 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871895 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871923 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871967 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.871991 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.872050 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.872076 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.872102 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.977086 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.983907 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.983965 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.983992 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:54 crc kubenswrapper[5001]: I0128 17:15:54.984022 5001 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 17:15:54 crc kubenswrapper[5001]: E0128 17:15:54.984600 5001 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.30:6443: connect: connection refused" node="crc" Jan 28 17:15:55 crc kubenswrapper[5001]: I0128 17:15:55.018743 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:15:55 crc kubenswrapper[5001]: I0128 17:15:55.037209 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 17:15:55 crc kubenswrapper[5001]: I0128 17:15:55.055712 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 17:15:55 crc kubenswrapper[5001]: I0128 17:15:55.066385 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 28 17:15:55 crc kubenswrapper[5001]: I0128 17:15:55.069739 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:15:55 crc kubenswrapper[5001]: W0128 17:15:55.099086 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-66f901ae541ee7f83d1f272af0d291d9ced466af92ea9dfd89cd81fcd0ec3bc9 WatchSource:0}: Error finding container 66f901ae541ee7f83d1f272af0d291d9ced466af92ea9dfd89cd81fcd0ec3bc9: Status 404 returned error can't find the container with id 66f901ae541ee7f83d1f272af0d291d9ced466af92ea9dfd89cd81fcd0ec3bc9 Jan 28 17:15:55 crc kubenswrapper[5001]: W0128 17:15:55.099718 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-077d0d9bb1f0996617f68260625f4b63f6f2bb64f2a853f5c6f48bb3163c6848 WatchSource:0}: Error finding container 077d0d9bb1f0996617f68260625f4b63f6f2bb64f2a853f5c6f48bb3163c6848: Status 404 returned error can't find the container with id 077d0d9bb1f0996617f68260625f4b63f6f2bb64f2a853f5c6f48bb3163c6848 Jan 28 17:15:55 crc kubenswrapper[5001]: W0128 17:15:55.103113 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-f69619215cd2dd9ab5e89c1e207bf85af67a91ecac53d1e85734ecb0563a5860 WatchSource:0}: Error finding container f69619215cd2dd9ab5e89c1e207bf85af67a91ecac53d1e85734ecb0563a5860: Status 404 returned error can't find the container with id f69619215cd2dd9ab5e89c1e207bf85af67a91ecac53d1e85734ecb0563a5860 Jan 28 17:15:55 crc kubenswrapper[5001]: W0128 17:15:55.111282 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-5409a918c7bfc2b25a43b9899c66fa961887ee08d289e89acc8046e8fcff9e05 WatchSource:0}: Error finding container 5409a918c7bfc2b25a43b9899c66fa961887ee08d289e89acc8046e8fcff9e05: Status 404 returned error can't find the container with id 5409a918c7bfc2b25a43b9899c66fa961887ee08d289e89acc8046e8fcff9e05 Jan 28 17:15:55 crc kubenswrapper[5001]: W0128 17:15:55.149216 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-1112b339c81bc4ed8750ad9f632c1516bc30d141294df2572c2e8f293821c128 WatchSource:0}: Error 
finding container 1112b339c81bc4ed8750ad9f632c1516bc30d141294df2572c2e8f293821c128: Status 404 returned error can't find the container with id 1112b339c81bc4ed8750ad9f632c1516bc30d141294df2572c2e8f293821c128 Jan 28 17:15:55 crc kubenswrapper[5001]: E0128 17:15:55.155915 5001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="800ms" Jan 28 17:15:55 crc kubenswrapper[5001]: W0128 17:15:55.341772 5001 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:15:55 crc kubenswrapper[5001]: E0128 17:15:55.341877 5001 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Jan 28 17:15:55 crc kubenswrapper[5001]: I0128 17:15:55.385399 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:55 crc kubenswrapper[5001]: I0128 17:15:55.386781 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:55 crc kubenswrapper[5001]: I0128 17:15:55.386816 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:55 crc kubenswrapper[5001]: I0128 17:15:55.386828 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:55 crc kubenswrapper[5001]: I0128 17:15:55.386853 5001 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 17:15:55 crc kubenswrapper[5001]: E0128 17:15:55.387263 5001 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.30:6443: connect: connection refused" node="crc" Jan 28 17:15:55 crc kubenswrapper[5001]: W0128 17:15:55.478013 5001 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:15:55 crc kubenswrapper[5001]: E0128 17:15:55.478114 5001 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Jan 28 17:15:55 crc kubenswrapper[5001]: I0128 17:15:55.536679 5001 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:15:55 crc kubenswrapper[5001]: I0128 17:15:55.544826 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:35:07.095313464 +0000 UTC Jan 28 17:15:55 crc kubenswrapper[5001]: W0128 17:15:55.588277 5001 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:15:55 crc kubenswrapper[5001]: E0128 17:15:55.588359 5001 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Jan 28 17:15:55 crc kubenswrapper[5001]: I0128 17:15:55.596869 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"66f901ae541ee7f83d1f272af0d291d9ced466af92ea9dfd89cd81fcd0ec3bc9"} Jan 28 17:15:55 crc kubenswrapper[5001]: I0128 17:15:55.597644 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1112b339c81bc4ed8750ad9f632c1516bc30d141294df2572c2e8f293821c128"} Jan 28 17:15:55 crc kubenswrapper[5001]: I0128 17:15:55.604441 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5409a918c7bfc2b25a43b9899c66fa961887ee08d289e89acc8046e8fcff9e05"} Jan 28 17:15:55 crc kubenswrapper[5001]: I0128 17:15:55.605307 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f69619215cd2dd9ab5e89c1e207bf85af67a91ecac53d1e85734ecb0563a5860"} Jan 28 17:15:55 crc kubenswrapper[5001]: I0128 17:15:55.607103 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"077d0d9bb1f0996617f68260625f4b63f6f2bb64f2a853f5c6f48bb3163c6848"} Jan 28 17:15:55 crc kubenswrapper[5001]: W0128 17:15:55.799776 5001 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:15:55 crc kubenswrapper[5001]: E0128 17:15:55.799861 5001 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Jan 28 17:15:55 crc kubenswrapper[5001]: E0128 17:15:55.956888 5001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="1.6s" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.187543 5001 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.188537 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.188570 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.188580 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.188606 5001 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 17:15:56 crc kubenswrapper[5001]: E0128 17:15:56.189057 5001 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.30:6443: connect: connection refused" node="crc" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.515569 5001 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 17:15:56 crc kubenswrapper[5001]: E0128 17:15:56.516810 5001 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.536048 5001 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.545235 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 07:13:21.819854294 +0000 UTC Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.613446 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23"} Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.613514 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9"} Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.617152 5001 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf" exitCode=0 Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.617355 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf"} Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.617436 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.618786 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.618843 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.618857 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.620597 5001 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3429222f6770b88d78a5fef2adfe86af27d7a4307b3ff1c319cbf2e0623d42ce" exitCode=0 Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.620664 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.620732 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3429222f6770b88d78a5fef2adfe86af27d7a4307b3ff1c319cbf2e0623d42ce"} Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.620752 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.622469 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.622555 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.622584 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.622898 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.622930 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.622945 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.625300 5001 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="6a79da78ffdb89d61c54a6d833239de75772e09af67b48d4325a028c5adc5190" exitCode=0 Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.625388 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.625393 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"6a79da78ffdb89d61c54a6d833239de75772e09af67b48d4325a028c5adc5190"} Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.626807 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.626837 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 
17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.626848 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.627305 5001 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="7324a0bfeca44aadf47d90d4b8317de00a420e6f3e307595a10401e38d5a8a02" exitCode=0 Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.627342 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"7324a0bfeca44aadf47d90d4b8317de00a420e6f3e307595a10401e38d5a8a02"} Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.627396 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.628568 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.628610 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:56 crc kubenswrapper[5001]: I0128 17:15:56.628625 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:57 crc kubenswrapper[5001]: W0128 17:15:57.298268 5001 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:15:57 crc kubenswrapper[5001]: E0128 17:15:57.298656 5001 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Jan 28 17:15:57 crc kubenswrapper[5001]: W0128 17:15:57.317669 5001 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:15:57 crc kubenswrapper[5001]: E0128 17:15:57.317752 5001 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.536681 5001 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.546116 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 02:17:35.186261533 +0000 UTC Jan 28 17:15:57 crc kubenswrapper[5001]: E0128 17:15:57.557846 5001 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="3.2s" Jan 28 17:15:57 crc kubenswrapper[5001]: W0128 17:15:57.624491 5001 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:15:57 crc kubenswrapper[5001]: E0128 17:15:57.624643 5001 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.633112 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d98ba6ed0b4a9cca0a4018dde2f2956b68bfa15e42392bbb86cb345053e5ec48"} Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.633159 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8097e37d8901215be7f2c5daaffad3edce3037a44bf7a1553f5d79f4ad81f96b"} Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.633170 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d8c04d38e9112e4823dc9124e21ecbbe0894c0c3030ff0e775cdc7b6190f5675"} Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.633196 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.637538 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.637577 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.637593 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.640156 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915"} Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.640206 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.640209 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237"} Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.641926 5001 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.642019 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.642038 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.646803 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0"} Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.647013 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d"} Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.647029 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723"} Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.647042 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76"} Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.648727 5001 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="73e394bf3cb9e34c8ba5565e15708aa075d6c9b16a912c33e099779b95923ae5" exitCode=0 Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.648790 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"73e394bf3cb9e34c8ba5565e15708aa075d6c9b16a912c33e099779b95923ae5"} Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.649000 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.654440 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.654543 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.654572 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.657473 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.657449 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ee29641e4ac4c2e6a8a436db66fd5bbcda2e8330425cb9af0244ae450ed6bdfe"} Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.658632 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.658685 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.658695 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.790120 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.791310 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.791355 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.791378 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:57 crc kubenswrapper[5001]: I0128 17:15:57.791404 5001 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 17:15:57 crc kubenswrapper[5001]: E0128 17:15:57.791877 5001 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.30:6443: connect: connection refused" node="crc" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.546326 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 00:52:26.191700759 +0000 UTC Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.663167 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500"} Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.663266 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.664075 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.664118 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.664130 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.664748 5001 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0f0572da40dd4036ac385c431e9b95ebdbc440380d573c7840706905e6f64696" exitCode=0 Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.664828 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.664860 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.664866 5001 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.664879 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 
28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.664894 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.664849 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0f0572da40dd4036ac385c431e9b95ebdbc440380d573c7840706905e6f64696"} Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.665793 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.665821 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.665829 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.665945 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.665986 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.665998 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.666029 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.666046 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.666054 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.666872 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.666895 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.666907 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:58 crc kubenswrapper[5001]: I0128 17:15:58.693443 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.118689 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.194999 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.547001 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:46:14.867805388 +0000 UTC Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.676170 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3211c42f91c694206b0d7f9b5d3a8d7074a97fc87b0905427c68f0ced40851a1"} Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.676216 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"68ca8f7da3cd431aff7e7871d8eac01b92d88ef8f24badfe981f2627ac3ba150"} Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.676230 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0062d6857f3d78f9bc6ff8aafb8f7a2f22ebbc2aac830655f16c5711059d9a7c"} Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.676241 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6b341276721063d6ccd5962f659a7a0ac136d940e51ae61bd6e1a1a693e192a0"} Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.676251 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"69055403fbc504efd2466fd045ce0012663b8169fc4852bed52b2ef760ad9d11"} Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.676257 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.676257 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.676432 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.676697 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.677098 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.677127 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.677136 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.677378 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.677404 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.677413 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.677949 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.677995 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:15:59 crc kubenswrapper[5001]: I0128 17:15:59.678008 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 28 17:16:00 crc kubenswrapper[5001]: I0128 17:16:00.547949 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 13:24:23.321688997 +0000 UTC Jan 28 17:16:00 crc kubenswrapper[5001]: I0128 17:16:00.570228 5001 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 17:16:00 crc kubenswrapper[5001]: I0128 17:16:00.679685 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:16:00 crc kubenswrapper[5001]: I0128 17:16:00.679736 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:16:00 crc kubenswrapper[5001]: I0128 17:16:00.680861 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:00 crc kubenswrapper[5001]: I0128 17:16:00.680911 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:00 crc kubenswrapper[5001]: I0128 17:16:00.680931 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:00 crc kubenswrapper[5001]: I0128 17:16:00.680861 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:00 crc kubenswrapper[5001]: I0128 17:16:00.681020 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:00 crc kubenswrapper[5001]: I0128 17:16:00.681037 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:00 crc kubenswrapper[5001]: I0128 17:16:00.992320 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:16:00 crc kubenswrapper[5001]: I0128 17:16:00.993595 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:00 crc kubenswrapper[5001]: I0128 17:16:00.993648 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:00 crc kubenswrapper[5001]: I0128 17:16:00.993661 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:00 crc kubenswrapper[5001]: I0128 17:16:00.993689 5001 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 17:16:01 crc kubenswrapper[5001]: I0128 17:16:01.272096 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 28 17:16:01 crc kubenswrapper[5001]: I0128 17:16:01.548510 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 05:46:01.705025238 +0000 UTC Jan 28 17:16:01 crc kubenswrapper[5001]: I0128 17:16:01.681395 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:16:01 crc kubenswrapper[5001]: I0128 17:16:01.682332 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:01 crc kubenswrapper[5001]: I0128 17:16:01.682366 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:01 crc 
kubenswrapper[5001]: I0128 17:16:01.682378 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:02 crc kubenswrapper[5001]: I0128 17:16:02.549439 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 23:23:40.016532373 +0000 UTC Jan 28 17:16:03 crc kubenswrapper[5001]: I0128 17:16:03.550142 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 15:35:38.08539019 +0000 UTC Jan 28 17:16:03 crc kubenswrapper[5001]: I0128 17:16:03.729523 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:16:03 crc kubenswrapper[5001]: I0128 17:16:03.729745 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:16:03 crc kubenswrapper[5001]: I0128 17:16:03.731038 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:03 crc kubenswrapper[5001]: I0128 17:16:03.731065 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:03 crc kubenswrapper[5001]: I0128 17:16:03.731076 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:03 crc kubenswrapper[5001]: I0128 17:16:03.772093 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:16:03 crc kubenswrapper[5001]: I0128 17:16:03.776962 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:16:04 crc kubenswrapper[5001]: I0128 17:16:04.550443 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 03:45:23.6935185 +0000 UTC Jan 28 17:16:04 crc kubenswrapper[5001]: I0128 17:16:04.686705 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:16:04 crc kubenswrapper[5001]: I0128 17:16:04.686785 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:16:04 crc kubenswrapper[5001]: E0128 17:16:04.686843 5001 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 17:16:04 crc kubenswrapper[5001]: I0128 17:16:04.687875 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:04 crc kubenswrapper[5001]: I0128 17:16:04.687914 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:04 crc kubenswrapper[5001]: I0128 17:16:04.687927 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:05 crc kubenswrapper[5001]: I0128 17:16:05.551295 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:10:37.92694267 +0000 UTC Jan 28 17:16:05 crc kubenswrapper[5001]: I0128 
17:16:05.689789 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:16:05 crc kubenswrapper[5001]: I0128 17:16:05.690845 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:05 crc kubenswrapper[5001]: I0128 17:16:05.690910 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:05 crc kubenswrapper[5001]: I0128 17:16:05.690922 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:06 crc kubenswrapper[5001]: I0128 17:16:06.551746 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:07:55.682977164 +0000 UTC Jan 28 17:16:06 crc kubenswrapper[5001]: I0128 17:16:06.729741 5001 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 17:16:06 crc kubenswrapper[5001]: I0128 17:16:06.729825 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:16:07 crc kubenswrapper[5001]: I0128 17:16:07.552295 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:32:44.636304101 +0000 UTC Jan 28 17:16:07 crc kubenswrapper[5001]: I0128 17:16:07.616960 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:16:07 crc kubenswrapper[5001]: I0128 17:16:07.617138 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:16:07 crc kubenswrapper[5001]: I0128 17:16:07.619653 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:07 crc kubenswrapper[5001]: I0128 17:16:07.619710 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:07 crc kubenswrapper[5001]: I0128 17:16:07.619725 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:07 crc kubenswrapper[5001]: I0128 17:16:07.620951 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:16:07 crc kubenswrapper[5001]: I0128 17:16:07.695015 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:16:07 crc kubenswrapper[5001]: I0128 17:16:07.695984 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:07 crc kubenswrapper[5001]: I0128 17:16:07.696039 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:07 crc 
kubenswrapper[5001]: I0128 17:16:07.696049 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:08 crc kubenswrapper[5001]: I0128 17:16:08.310633 5001 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 28 17:16:08 crc kubenswrapper[5001]: I0128 17:16:08.310740 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 28 17:16:08 crc kubenswrapper[5001]: I0128 17:16:08.321389 5001 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 28 17:16:08 crc kubenswrapper[5001]: I0128 17:16:08.321497 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 28 17:16:08 crc kubenswrapper[5001]: I0128 17:16:08.553104 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 14:33:41.388872352 +0000 UTC Jan 28 17:16:09 crc kubenswrapper[5001]: I0128 17:16:09.124779 5001 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]log ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]etcd ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/generic-apiserver-start-informers ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/priority-and-fairness-filter ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/start-apiextensions-informers ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/start-apiextensions-controllers ok Jan 28 17:16:09 crc kubenswrapper[5001]: 
[+]poststarthook/crd-informer-synced ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/start-system-namespaces-controller ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 28 17:16:09 crc kubenswrapper[5001]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 28 17:16:09 crc kubenswrapper[5001]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/bootstrap-controller ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/start-kube-aggregator-informers ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/apiservice-registration-controller ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/apiservice-discovery-controller ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]autoregister-completion ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/apiservice-openapi-controller ok Jan 28 17:16:09 crc kubenswrapper[5001]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 28 17:16:09 crc kubenswrapper[5001]: livez check failed Jan 28 17:16:09 crc kubenswrapper[5001]: I0128 17:16:09.124836 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 17:16:09 crc kubenswrapper[5001]: I0128 17:16:09.350594 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 28 17:16:09 crc kubenswrapper[5001]: I0128 17:16:09.350775 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:16:09 crc kubenswrapper[5001]: I0128 17:16:09.352001 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:09 crc kubenswrapper[5001]: I0128 17:16:09.352032 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:09 crc kubenswrapper[5001]: I0128 17:16:09.352042 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:09 crc kubenswrapper[5001]: I0128 17:16:09.433243 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 28 17:16:09 crc kubenswrapper[5001]: I0128 
17:16:09.554407 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:17:47.158323875 +0000 UTC Jan 28 17:16:09 crc kubenswrapper[5001]: I0128 17:16:09.699768 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:16:09 crc kubenswrapper[5001]: I0128 17:16:09.700954 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:09 crc kubenswrapper[5001]: I0128 17:16:09.701042 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:09 crc kubenswrapper[5001]: I0128 17:16:09.701092 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:09 crc kubenswrapper[5001]: I0128 17:16:09.712958 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 28 17:16:10 crc kubenswrapper[5001]: I0128 17:16:10.555468 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 17:55:23.931245926 +0000 UTC Jan 28 17:16:10 crc kubenswrapper[5001]: I0128 17:16:10.702282 5001 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 17:16:10 crc kubenswrapper[5001]: I0128 17:16:10.703352 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:10 crc kubenswrapper[5001]: I0128 17:16:10.703410 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:10 crc kubenswrapper[5001]: I0128 17:16:10.703425 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:11 crc kubenswrapper[5001]: I0128 17:16:11.555904 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:04:43.68714717 +0000 UTC Jan 28 17:16:12 crc kubenswrapper[5001]: I0128 17:16:12.556438 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 05:04:13.827693999 +0000 UTC Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.308014 5001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.312892 5001 trace.go:236] Trace[982687711]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 17:15:58.872) (total time: 14440ms): Jan 28 17:16:13 crc kubenswrapper[5001]: Trace[982687711]: ---"Objects listed" error: 14440ms (17:16:13.312) Jan 28 17:16:13 crc kubenswrapper[5001]: Trace[982687711]: [14.440721207s] [14.440721207s] END Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.312941 5001 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.324245 5001 trace.go:236] Trace[1566148160]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 
17:16:02.239) (total time: 11084ms): Jan 28 17:16:13 crc kubenswrapper[5001]: Trace[1566148160]: ---"Objects listed" error: 11084ms (17:16:13.324) Jan 28 17:16:13 crc kubenswrapper[5001]: Trace[1566148160]: [11.084918676s] [11.084918676s] END Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.324281 5001 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.324363 5001 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.324893 5001 trace.go:236] Trace[851212992]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 17:16:01.041) (total time: 12283ms): Jan 28 17:16:13 crc kubenswrapper[5001]: Trace[851212992]: ---"Objects listed" error: 12283ms (17:16:13.324) Jan 28 17:16:13 crc kubenswrapper[5001]: Trace[851212992]: [12.283143763s] [12.283143763s] END Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.324912 5001 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.325737 5001 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.326075 5001 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.326102 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.326426 5001 trace.go:236] Trace[1428756053]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 17:16:02.567) (total time: 10758ms): Jan 28 17:16:13 crc kubenswrapper[5001]: Trace[1428756053]: ---"Objects listed" error: 10756ms (17:16:13.324) Jan 28 17:16:13 crc kubenswrapper[5001]: Trace[1428756053]: [10.758237766s] [10.758237766s] END Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.328083 5001 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.331451 5001 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.336486 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.336529 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.336540 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.336557 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.336571 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:13Z","lastTransitionTime":"2026-01-28T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.364007 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.368293 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.368338 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.368349 5001 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.368365 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.368376 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:13Z","lastTransitionTime":"2026-01-28T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.372855 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.380286 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.388936 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.388989 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.388998 5001 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.389011 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.389020 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:13Z","lastTransitionTime":"2026-01-28T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.398265 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.401016 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.401049 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.401059 5001 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.401076 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.401088 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:13Z","lastTransitionTime":"2026-01-28T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.408924 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.409066 5001 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.410406 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.410432 5001 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.410440 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.410454 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.410465 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:13Z","lastTransitionTime":"2026-01-28T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.512401 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.512455 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.512464 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.512478 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.512492 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:13Z","lastTransitionTime":"2026-01-28T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.528553 5001 apiserver.go:52] "Watching apiserver" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.530436 5001 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.530667 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.531004 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.531076 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.531090 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.531101 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.531079 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.531121 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.531293 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.531332 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.531475 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.532533 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.533143 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.533155 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.533224 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.534282 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.534651 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.535031 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.535044 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.537441 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.544650 5001 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.556348 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.556572 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 07:24:42.81125667 +0000 UTC Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.570915 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.580676 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.594486 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.603600 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.611019 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.614928 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.614992 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.615007 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.615025 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.615037 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:13Z","lastTransitionTime":"2026-01-28T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.625562 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626424 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626504 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626547 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626578 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626603 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626626 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626647 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626670 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626689 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626709 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626702 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626732 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626753 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626774 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626795 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626819 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626855 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626883 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626907 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626910 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.626986 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627014 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627036 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627055 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627161 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627183 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627205 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627228 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627262 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627284 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627305 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627321 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627338 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627356 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627377 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627402 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627423 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627443 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627466 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627481 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627496 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627516 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627534 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627559 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627579 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627621 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627640 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627659 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627676 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627691 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627709 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627729 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627760 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627778 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627810 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627825 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627840 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627854 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627884 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627916 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627940 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.627968 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628018 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628035 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628052 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628069 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628087 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628102 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628120 5001 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628136 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628153 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628158 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628172 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628183 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628222 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628252 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628285 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628310 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628342 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628346 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628366 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628388 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628411 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628434 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628451 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628456 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628531 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628565 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628592 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628623 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628632 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628659 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628695 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628721 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628745 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628770 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628795 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628825 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628848 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628871 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628886 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628902 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628919 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628936 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628987 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629004 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629020 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629036 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629055 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629073 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629089 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629120 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629138 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629155 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629171 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629189 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629206 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629222 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629240 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629259 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629276 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629293 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629307 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629332 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629362 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629378 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629395 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629412 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629432 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629448 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629465 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629487 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629510 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629532 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629551 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629568 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629586 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629603 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629618 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629636 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629654 5001 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629671 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629687 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629704 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629720 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629738 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629753 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629802 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629820 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629843 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 17:16:13 crc 
kubenswrapper[5001]: I0128 17:16:13.629862 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629879 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629895 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629911 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629928 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629948 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629968 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630036 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630057 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630076 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630093 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630140 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630159 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630176 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630193 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630210 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630228 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630246 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630264 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630284 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630304 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630322 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630345 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630363 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630381 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630397 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630414 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630431 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630448 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630467 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630494 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630515 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630531 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630549 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630566 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630583 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630601 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630619 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630640 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630657 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630674 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630690 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630709 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630728 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630745 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630765 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630783 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630799 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630817 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630834 5001 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630853 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630869 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630911 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630990 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631019 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631051 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631078 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631100 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 
17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631128 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631158 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631185 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631207 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631225 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631244 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631264 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631282 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631387 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631406 5001 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631422 5001 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631440 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631454 5001 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631470 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.628946 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629343 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629350 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629491 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629806 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629827 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.629904 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630019 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630183 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630272 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630460 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630569 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630685 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630716 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630885 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.630922 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631149 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631482 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631647 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.631731 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.632002 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.632157 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.632301 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.632297 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.632310 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.632402 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.632543 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.635870 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.632668 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.632758 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:16:14.132735069 +0000 UTC m=+20.300523319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.632866 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.632899 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.633215 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.633250 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.633280 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.633469 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.633701 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.634576 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.634672 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.634881 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.635120 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.635212 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.635424 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.635439 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.636057 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.635455 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.635928 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.635999 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.636147 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.636198 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.636384 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.636395 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.636545 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.636589 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.636745 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.636762 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.636841 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.636877 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.637173 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.637259 5001 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.637376 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:14.137347637 +0000 UTC m=+20.305135867 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.638033 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.638141 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.638238 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.638713 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.638781 5001 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.638841 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:14.138820228 +0000 UTC m=+20.306608468 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.639654 5001 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.640483 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.644219 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.644543 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.645096 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.645133 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.645342 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.647496 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.651177 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.651335 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.651560 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.651758 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.652067 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.652270 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.652299 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.652312 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.652700 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.653291 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.653763 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.653828 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.653849 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.653862 5001 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.654048 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.654308 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.654423 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.654562 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.654572 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.654632 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.654937 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.656903 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.657698 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.657747 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.657763 5001 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.658094 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.658233 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.658534 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.658545 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.658783 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.636104 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.659053 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.659162 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.659182 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:14.159164971 +0000 UTC m=+20.326953201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:13 crc kubenswrapper[5001]: E0128 17:16:13.659251 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:14.159217332 +0000 UTC m=+20.327005722 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.659844 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.660132 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.653658 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.661003 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.661108 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.661400 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.661504 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.661729 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.661922 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.661999 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.662057 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.661919 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.662233 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.662455 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.662546 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.663455 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.663781 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.664145 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.664290 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.664403 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.664423 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.664471 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.664767 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.664875 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.665017 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.665021 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.664687 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.665180 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.665298 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.665476 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.665657 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.665655 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.665674 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.666439 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.666750 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.666857 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.667180 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.667812 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.667877 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.668430 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.668832 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.668571 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.669036 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.669090 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.669191 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.669534 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.669771 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.669894 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.669911 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.670056 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.670474 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.670497 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.670568 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.670612 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.670743 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.670759 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.671201 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.671245 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.671396 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.671405 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.671556 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.671721 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.671997 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.672549 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.673005 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.673304 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.673479 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.673411 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.673880 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.674533 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.676139 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.676154 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.676355 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.676515 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.676556 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.676579 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.676757 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.676913 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.676997 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.677076 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.677095 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.677214 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.677384 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.677429 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.677496 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.677227 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.677690 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.677747 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.677897 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.677969 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.678078 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.678242 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.678636 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.679477 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.679683 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.680226 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.680334 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.683267 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.694799 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.696861 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.704644 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.714629 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.716580 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.717143 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.717178 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.717187 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.717203 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.717219 5001 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500" exitCode=255 Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.717214 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:13Z","lastTransitionTime":"2026-01-28T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.717281 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500"} Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.717646 5001 scope.go:117] "RemoveContainer" containerID="b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.727106 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732150 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732184 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732234 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732244 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732253 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732263 5001 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732271 5001 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732279 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732287 5001 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732295 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732302 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732310 5001 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732318 5001 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732325 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732333 5001 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732342 5001 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732350 5001 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732363 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732371 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732379 5001 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732386 5001 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732395 5001 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732411 5001 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732419 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732426 5001 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732434 5001 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732442 5001 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732449 5001 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732569 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732787 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732806 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732920 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732937 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732951 5001 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node 
\"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732962 5001 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.732987 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733001 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733012 5001 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733024 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733036 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733048 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733193 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733208 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733217 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733225 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733233 5001 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733242 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733250 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733258 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733266 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733274 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733282 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733291 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733300 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733308 5001 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733317 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733324 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733332 5001 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733339 5001 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733347 5001 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733354 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733363 5001 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733371 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733379 5001 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733387 5001 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733395 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733404 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733412 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733419 5001 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733429 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733440 5001 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733480 5001 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733490 5001 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733501 5001 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733511 5001 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733521 5001 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733570 5001 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733578 5001 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733587 5001 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733594 5001 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733643 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733654 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733663 5001 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733670 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733679 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733707 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733715 5001 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733724 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733746 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733755 5001 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733796 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733806 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733814 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733823 5001 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733833 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733882 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733892 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733900 5001 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733908 5001 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733915 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733941 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733949 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733959 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.733967 5001 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734023 5001 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734034 5001 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734045 5001 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734054 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734062 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734088 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734099 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734106 5001 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734115 5001 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734123 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734131 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734139 5001 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734181 5001 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734189 5001 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734198 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734205 5001 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734213 5001 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734221 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734230 5001 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734238 5001 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734246 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" 
(UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734254 5001 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734261 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734269 5001 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734278 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734286 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734293 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734301 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734309 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734317 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734325 5001 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734333 5001 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734341 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734348 5001 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734356 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734364 5001 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734372 5001 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734380 5001 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734388 5001 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734396 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734405 5001 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734413 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734421 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734429 5001 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734437 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734445 5001 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734453 5001 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734461 5001 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734468 5001 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734476 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734484 5001 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734491 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734499 5001 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734507 5001 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734515 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734526 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734535 5001 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734542 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734550 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734558 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734566 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734573 5001 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734581 5001 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734589 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734597 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734606 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734613 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734621 5001 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734629 5001 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734637 5001 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734644 5001 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734652 5001 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734660 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734669 5001 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734676 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734684 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734692 5001 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734700 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734709 5001 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734718 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734727 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734735 5001 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734743 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734752 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734761 5001 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734769 5001 reconciler_common.go:293] "Volume detached for 
volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734777 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.734787 5001 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.736362 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.739766 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.739600 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-2
8T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.743124 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.750426 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.759108 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.768656 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.778135 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.787821 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.797147 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-2
8T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.806398 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.815612 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.820429 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.820465 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.820476 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.820500 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.820514 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:13Z","lastTransitionTime":"2026-01-28T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.826333 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.850321 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.850713 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.851107 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.859194 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 17:16:13 crc kubenswrapper[5001]: W0128 17:16:13.865806 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-675babf4ac36d65bcbfd3894137c30921ca7408bc6e3aaed272181d94bdd7808 WatchSource:0}: Error finding container 675babf4ac36d65bcbfd3894137c30921ca7408bc6e3aaed272181d94bdd7808: Status 404 returned error can't find the container with id 675babf4ac36d65bcbfd3894137c30921ca7408bc6e3aaed272181d94bdd7808 Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.878184 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: W0128 17:16:13.881708 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-48aa38a92009ef7027b6d64c19920212b5d3fa358fc3f7c0f2352ef2db48e885 WatchSource:0}: Error finding container 48aa38a92009ef7027b6d64c19920212b5d3fa358fc3f7c0f2352ef2db48e885: Status 404 returned error can't find the container with id 48aa38a92009ef7027b6d64c19920212b5d3fa358fc3f7c0f2352ef2db48e885 Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.890423 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.903393 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.923934 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.924035 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.924052 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.924068 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:13 crc kubenswrapper[5001]: I0128 17:16:13.924078 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:13Z","lastTransitionTime":"2026-01-28T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.026472 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.026502 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.026512 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.026528 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.026538 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:14Z","lastTransitionTime":"2026-01-28T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.123948 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.136408 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:16:14 crc kubenswrapper[5001]: E0128 17:16:14.136584 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:16:15.13656806 +0000 UTC m=+21.304356290 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.138742 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756b
b31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] 
Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.141752 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.141774 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.141783 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.141797 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.141806 5001 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:14Z","lastTransitionTime":"2026-01-28T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.150621 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.160509 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.169061 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.177394 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.188667 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.198105 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.201936 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.209838 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.218963 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.235120 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.237316 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.237348 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.237365 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.237385 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:14 crc kubenswrapper[5001]: E0128 17:16:14.237434 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 17:16:14 crc kubenswrapper[5001]: E0128 17:16:14.237443 5001 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 17:16:14 crc kubenswrapper[5001]: E0128 17:16:14.237457 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 17:16:14 crc kubenswrapper[5001]: E0128 17:16:14.237471 5001 
projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:14 crc kubenswrapper[5001]: E0128 17:16:14.237477 5001 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 17:16:14 crc kubenswrapper[5001]: E0128 17:16:14.237528 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 17:16:14 crc kubenswrapper[5001]: E0128 17:16:14.237549 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 17:16:14 crc kubenswrapper[5001]: E0128 17:16:14.237489 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:15.237477152 +0000 UTC m=+21.405265382 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 17:16:14 crc kubenswrapper[5001]: E0128 17:16:14.237563 5001 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:14 crc kubenswrapper[5001]: E0128 17:16:14.237587 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:15.237564764 +0000 UTC m=+21.405352994 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:14 crc kubenswrapper[5001]: E0128 17:16:14.237631 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:15.237596335 +0000 UTC m=+21.405384645 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 17:16:14 crc kubenswrapper[5001]: E0128 17:16:14.237651 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:15.237644567 +0000 UTC m=+21.405432787 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.243669 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.243692 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.243699 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.243711 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.243719 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:14Z","lastTransitionTime":"2026-01-28T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.246281 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.255963 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.264680 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.275646 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.285375 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.296935 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.345712 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.345757 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.345765 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.345778 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.345787 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:14Z","lastTransitionTime":"2026-01-28T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.448333 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.448377 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.448395 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.448419 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.448435 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:14Z","lastTransitionTime":"2026-01-28T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.550701 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.550747 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.550755 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.550789 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.550799 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:14Z","lastTransitionTime":"2026-01-28T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.557180 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 20:47:58.631201428 +0000 UTC Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.597466 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.598065 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.599487 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.600228 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.601306 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.601873 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.602587 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.603554 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.604177 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.605194 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.605677 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.606753 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.607283 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 28 17:16:14 crc 
kubenswrapper[5001]: I0128 17:16:14.607842 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.608830 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.609169 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756b
b31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] 
Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.609385 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.610331 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.610769 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" 
path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.611385 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.612433 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.612922 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.613875 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.614336 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.615441 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.615996 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.616654 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.617771 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.618290 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.619278 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.619745 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.620611 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.620746 5001 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.620863 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.622564 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.623490 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.623930 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.625553 
5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.626217 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.627170 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.627868 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.629170 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.629599 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.630615 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.631277 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.632284 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.632435 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.633020 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.634080 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.634814 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.636159 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.636651 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.637549 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.638045 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.638961 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.639600 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.640234 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.645040 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syn
cer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.653419 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.653475 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.653487 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.653502 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.653513 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:14Z","lastTransitionTime":"2026-01-28T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.655569 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.667392 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.681398 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.697577 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.720147 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"675babf4ac36d65bcbfd3894137c30921ca7408bc6e3aaed272181d94bdd7808"} Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.725121 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.727156 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7"} Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.727440 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.729195 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201"} Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.729254 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d0f2bff525ba38e9b57b44d3b8064ba395b200ca3ca13defcf8056f526015386"} Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.730783 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890"} Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.730815 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8"} Jan 28 17:16:14 
crc kubenswrapper[5001]: I0128 17:16:14.730855 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"48aa38a92009ef7027b6d64c19920212b5d3fa358fc3f7c0f2352ef2db48e885"} Jan 28 17:16:14 crc kubenswrapper[5001]: E0128 17:16:14.736045 5001 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.738304 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.752870 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.755479 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.755514 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.755525 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.755543 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.755555 5001 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:14Z","lastTransitionTime":"2026-01-28T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.766112 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.778358 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.796713 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.808409 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.831466 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.857840 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.857874 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.857885 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.857898 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.857908 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:14Z","lastTransitionTime":"2026-01-28T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.859150 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.877877 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.892476 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.905732 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.919034 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.931247 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.945010 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.957011 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.960191 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.960237 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.960246 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.960260 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.960271 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:14Z","lastTransitionTime":"2026-01-28T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:14 crc kubenswrapper[5001]: I0128 17:16:14.969809 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.063224 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.063291 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.063306 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.063329 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.063342 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:15Z","lastTransitionTime":"2026-01-28T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.145709 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:16:15 crc kubenswrapper[5001]: E0128 17:16:15.145847 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:16:17.145826865 +0000 UTC m=+23.313615095 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.165339 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.165376 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.165387 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.165402 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.165411 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:15Z","lastTransitionTime":"2026-01-28T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.246861 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.246902 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.246928 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.246950 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:15 crc kubenswrapper[5001]: E0128 17:16:15.247025 5001 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 17:16:15 crc kubenswrapper[5001]: E0128 17:16:15.247047 5001 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 17:16:15 crc kubenswrapper[5001]: E0128 17:16:15.247082 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:17.247065156 +0000 UTC m=+23.414853386 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 17:16:15 crc kubenswrapper[5001]: E0128 17:16:15.247098 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:17.247090317 +0000 UTC m=+23.414878547 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 17:16:15 crc kubenswrapper[5001]: E0128 17:16:15.247173 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 17:16:15 crc kubenswrapper[5001]: E0128 17:16:15.247262 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 17:16:15 crc kubenswrapper[5001]: E0128 17:16:15.247276 5001 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:15 crc kubenswrapper[5001]: E0128 17:16:15.247332 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:17.247313693 +0000 UTC m=+23.415101973 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:15 crc kubenswrapper[5001]: E0128 17:16:15.247173 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 17:16:15 crc kubenswrapper[5001]: E0128 17:16:15.247364 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 17:16:15 crc kubenswrapper[5001]: E0128 17:16:15.247372 5001 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:15 crc kubenswrapper[5001]: E0128 17:16:15.247406 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:17.247397675 +0000 UTC m=+23.415185985 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.267667 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.267884 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.267892 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.267905 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.267915 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:15Z","lastTransitionTime":"2026-01-28T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.370242 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.370286 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.370298 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.370316 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.370328 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:15Z","lastTransitionTime":"2026-01-28T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.472708 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.472759 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.472772 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.472792 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.472805 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:15Z","lastTransitionTime":"2026-01-28T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.557470 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 12:43:08.969749673 +0000 UTC Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.575737 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.575768 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.575777 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.575792 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.575805 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:15Z","lastTransitionTime":"2026-01-28T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.593322 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:15 crc kubenswrapper[5001]: E0128 17:16:15.593442 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.593815 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:15 crc kubenswrapper[5001]: E0128 17:16:15.593871 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.593914 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:15 crc kubenswrapper[5001]: E0128 17:16:15.593967 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.678053 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.678097 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.678105 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.678120 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.678131 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:15Z","lastTransitionTime":"2026-01-28T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.780500 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.780566 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.780579 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.780598 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.780611 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:15Z","lastTransitionTime":"2026-01-28T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.883939 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.884186 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.884278 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.884371 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.884460 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:15Z","lastTransitionTime":"2026-01-28T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.986962 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.987461 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.987528 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.987593 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:15 crc kubenswrapper[5001]: I0128 17:16:15.987648 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:15Z","lastTransitionTime":"2026-01-28T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.090320 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.090383 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.090406 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.090431 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.090448 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:16Z","lastTransitionTime":"2026-01-28T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.193171 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.193213 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.193225 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.193243 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.193254 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:16Z","lastTransitionTime":"2026-01-28T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.295952 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.295998 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.296008 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.296023 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.296034 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:16Z","lastTransitionTime":"2026-01-28T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.397849 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.397897 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.397908 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.397924 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.397939 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:16Z","lastTransitionTime":"2026-01-28T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.501148 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.501231 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.501250 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.501277 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.501295 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:16Z","lastTransitionTime":"2026-01-28T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.557761 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 23:34:25.163132295 +0000 UTC Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.603732 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.603818 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.603828 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.603849 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.603862 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:16Z","lastTransitionTime":"2026-01-28T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.707071 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.707121 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.707132 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.707152 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.707170 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:16Z","lastTransitionTime":"2026-01-28T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.737009 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1"} Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.754342 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.768200 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.779424 5001 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b4260827
0f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.791527 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.801771 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.810224 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.810279 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.810292 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.810308 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.810319 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:16Z","lastTransitionTime":"2026-01-28T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.815219 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.827426 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.840455 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.911997 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.912037 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.912048 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.912063 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:16 crc kubenswrapper[5001]: I0128 17:16:16.912074 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:16Z","lastTransitionTime":"2026-01-28T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.014148 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.014205 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.014221 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.014249 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.014262 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:17Z","lastTransitionTime":"2026-01-28T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.116791 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.116837 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.116849 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.116865 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.116881 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:17Z","lastTransitionTime":"2026-01-28T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.164220 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:16:17 crc kubenswrapper[5001]: E0128 17:16:17.164423 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:16:21.164390485 +0000 UTC m=+27.332178715 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.219199 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.219240 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.219250 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.219269 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.219280 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:17Z","lastTransitionTime":"2026-01-28T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.264847 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.264904 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.264925 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.264944 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:17 crc kubenswrapper[5001]: E0128 17:16:17.265043 5001 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 28 17:16:17 crc kubenswrapper[5001]: E0128 17:16:17.265090 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:21.265076831 +0000 UTC m=+27.432865061 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 17:16:17 crc kubenswrapper[5001]: E0128 17:16:17.265187 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 17:16:17 crc kubenswrapper[5001]: E0128 17:16:17.265242 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 17:16:17 crc kubenswrapper[5001]: E0128 17:16:17.265262 5001 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:17 crc kubenswrapper[5001]: E0128 17:16:17.265322 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 17:16:17 crc kubenswrapper[5001]: E0128 17:16:17.265374 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 17:16:17 crc kubenswrapper[5001]: E0128 17:16:17.265377 5001 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 17:16:17 crc kubenswrapper[5001]: E0128 17:16:17.265389 5001 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:17 crc kubenswrapper[5001]: E0128 17:16:17.265353 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:21.265326118 +0000 UTC m=+27.433114388 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:17 crc kubenswrapper[5001]: E0128 17:16:17.265655 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:21.265613186 +0000 UTC m=+27.433401456 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 17:16:17 crc kubenswrapper[5001]: E0128 17:16:17.265703 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:21.265689388 +0000 UTC m=+27.433477648 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.321895 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.321954 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.321969 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.322022 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.322042 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:17Z","lastTransitionTime":"2026-01-28T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.424417 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.424468 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.424477 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.424491 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.424502 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:17Z","lastTransitionTime":"2026-01-28T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.527370 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.527419 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.527428 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.527446 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.527456 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:17Z","lastTransitionTime":"2026-01-28T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.558824 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 15:06:36.347239202 +0000 UTC Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.593855 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.593885 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.593885 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:17 crc kubenswrapper[5001]: E0128 17:16:17.594012 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:17 crc kubenswrapper[5001]: E0128 17:16:17.594106 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:17 crc kubenswrapper[5001]: E0128 17:16:17.594184 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.629407 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.629449 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.629458 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.629475 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.629495 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:17Z","lastTransitionTime":"2026-01-28T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.731899 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.731935 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.731948 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.731962 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.731983 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:17Z","lastTransitionTime":"2026-01-28T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.834595 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.834661 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.834678 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.834697 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.834735 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:17Z","lastTransitionTime":"2026-01-28T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.936996 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.937047 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.937059 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.937076 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:17 crc kubenswrapper[5001]: I0128 17:16:17.937089 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:17Z","lastTransitionTime":"2026-01-28T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.039503 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.039559 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.039573 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.039590 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.039603 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:18Z","lastTransitionTime":"2026-01-28T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.142242 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.142293 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.142305 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.142323 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.142334 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:18Z","lastTransitionTime":"2026-01-28T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.245792 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.245836 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.245846 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.245860 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.245870 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:18Z","lastTransitionTime":"2026-01-28T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.348292 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.348362 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.348373 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.348388 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.348398 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:18Z","lastTransitionTime":"2026-01-28T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.452792 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.452879 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.452950 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.453001 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.453029 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:18Z","lastTransitionTime":"2026-01-28T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.556919 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.557064 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.557091 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.557124 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.557151 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:18Z","lastTransitionTime":"2026-01-28T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.559035 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 09:25:57.669609588 +0000 UTC Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.660808 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.660901 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.660944 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.660966 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.661006 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:18Z","lastTransitionTime":"2026-01-28T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.763602 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.763663 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.763676 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.763694 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.763706 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:18Z","lastTransitionTime":"2026-01-28T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.866700 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.866779 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.866818 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.866838 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.866853 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:18Z","lastTransitionTime":"2026-01-28T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.969617 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.969676 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.969691 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.969706 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:18 crc kubenswrapper[5001]: I0128 17:16:18.969719 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:18Z","lastTransitionTime":"2026-01-28T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.072772 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.072835 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.072847 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.072862 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.072876 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:19Z","lastTransitionTime":"2026-01-28T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.175824 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.175874 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.175885 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.175902 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.175913 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:19Z","lastTransitionTime":"2026-01-28T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.279115 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.279164 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.279175 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.279190 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.279201 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:19Z","lastTransitionTime":"2026-01-28T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.381137 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.381175 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.381185 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.381199 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.381208 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:19Z","lastTransitionTime":"2026-01-28T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.483255 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.483289 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.483297 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.483310 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.483319 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:19Z","lastTransitionTime":"2026-01-28T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.559574 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 18:22:33.056116338 +0000 UTC Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.586113 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.586153 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.586161 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.586175 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.586185 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:19Z","lastTransitionTime":"2026-01-28T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.593679 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.593762 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.593725 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:19 crc kubenswrapper[5001]: E0128 17:16:19.593912 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:19 crc kubenswrapper[5001]: E0128 17:16:19.594073 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:19 crc kubenswrapper[5001]: E0128 17:16:19.594406 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.689715 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.689772 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.689787 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.689810 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.689827 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:19Z","lastTransitionTime":"2026-01-28T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.792546 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.792584 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.792624 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.792644 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.792654 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:19Z","lastTransitionTime":"2026-01-28T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.895296 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.895347 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.895359 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.895380 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.895392 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:19Z","lastTransitionTime":"2026-01-28T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.993681 5001 csr.go:261] certificate signing request csr-9s5wr is approved, waiting to be issued Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.997360 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.997393 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.997402 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.997420 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:19 crc kubenswrapper[5001]: I0128 17:16:19.997429 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:19Z","lastTransitionTime":"2026-01-28T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.025238 5001 csr.go:257] certificate signing request csr-9s5wr is issued Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.100266 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.100320 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.100334 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.100358 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.100371 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:20Z","lastTransitionTime":"2026-01-28T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.202543 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.202598 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.202609 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.202630 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.202643 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:20Z","lastTransitionTime":"2026-01-28T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.305272 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.305336 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.305348 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.305368 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.305383 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:20Z","lastTransitionTime":"2026-01-28T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.408028 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.408072 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.408083 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.408099 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.408112 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:20Z","lastTransitionTime":"2026-01-28T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.510198 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.510237 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.510249 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.510265 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.510277 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:20Z","lastTransitionTime":"2026-01-28T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.560154 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 15:05:13.231912144 +0000 UTC Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.612044 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.612080 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.612095 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.612109 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.612119 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:20Z","lastTransitionTime":"2026-01-28T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.714232 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.714272 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.714284 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.714301 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.714314 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:20Z","lastTransitionTime":"2026-01-28T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.815995 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.816038 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.816049 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.816067 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.816078 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:20Z","lastTransitionTime":"2026-01-28T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.918253 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.918304 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.918317 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.918333 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:20 crc kubenswrapper[5001]: I0128 17:16:20.918344 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:20Z","lastTransitionTime":"2026-01-28T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.004037 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-qz9lj"] Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.004308 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-qz9lj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.005898 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mqgwk"] Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.006445 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.006534 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.008817 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-7fgxj"] Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.009242 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.009895 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.011245 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.011360 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.011580 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.011801 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.011864 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 17:16:21 crc kubenswrapper[5001]: W0128 17:16:21.011941 5001 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: secrets "default-dockercfg-2q5b6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.011990 5001 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-2q5b6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 17:16:21 crc kubenswrapper[5001]: W0128 17:16:21.012209 5001 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: configmaps "cni-copy-resources" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": 
no relationship found between node 'crc' and this object Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.012228 5001 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-copy-resources\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.012752 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 17:16:21 crc kubenswrapper[5001]: W0128 17:16:21.012821 5001 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.012850 5001 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.012964 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 17:16:21 crc kubenswrapper[5001]: W0128 17:16:21.013301 5001 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.013323 5001 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.019997 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.020025 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.020033 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.020048 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.020058 5001 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:21Z","lastTransitionTime":"2026-01-28T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.022468 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.025948 5001 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-28 17:11:20 +0000 UTC, rotation deadline is 2026-11-11 20:19:20.124046658 +0000 UTC Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.025990 5001 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6891h2m59.098059389s for next certificate rotation Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.035929 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.047008 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.056101 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.068904 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.079352 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.089072 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.099138 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8de2d052-6f7c-4345-91fa-ba2fc7532251-mcd-auth-proxy-config\") pod \"machine-config-daemon-mqgwk\" (UID: \"8de2d052-6f7c-4345-91fa-ba2fc7532251\") " pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.099204 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfr44\" (UniqueName: \"kubernetes.io/projected/652f7f95-a748-4fd4-b323-19a93494ddc0-kube-api-access-mfr44\") pod \"node-resolver-qz9lj\" (UID: \"652f7f95-a748-4fd4-b323-19a93494ddc0\") " pod="openshift-dns/node-resolver-qz9lj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.099256 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/8de2d052-6f7c-4345-91fa-ba2fc7532251-rootfs\") pod \"machine-config-daemon-mqgwk\" (UID: \"8de2d052-6f7c-4345-91fa-ba2fc7532251\") " pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.099282 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/652f7f95-a748-4fd4-b323-19a93494ddc0-hosts-file\") pod \"node-resolver-qz9lj\" (UID: \"652f7f95-a748-4fd4-b323-19a93494ddc0\") " pod="openshift-dns/node-resolver-qz9lj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.099304 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8de2d052-6f7c-4345-91fa-ba2fc7532251-proxy-tls\") pod \"machine-config-daemon-mqgwk\" (UID: \"8de2d052-6f7c-4345-91fa-ba2fc7532251\") " pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.099325 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thsl5\" (UniqueName: \"kubernetes.io/projected/8de2d052-6f7c-4345-91fa-ba2fc7532251-kube-api-access-thsl5\") pod \"machine-config-daemon-mqgwk\" (UID: \"8de2d052-6f7c-4345-91fa-ba2fc7532251\") " pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.107385 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.118359 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.121695 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.121744 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.121757 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.121773 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.121783 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:21Z","lastTransitionTime":"2026-01-28T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.129682 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.139954 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.150409 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.160706 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.174636 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.186041 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.199908 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200012 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-etc-kubernetes\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200031 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-os-release\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200051 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/652f7f95-a748-4fd4-b323-19a93494ddc0-hosts-file\") pod \"node-resolver-qz9lj\" (UID: \"652f7f95-a748-4fd4-b323-19a93494ddc0\") " pod="openshift-dns/node-resolver-qz9lj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200067 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8de2d052-6f7c-4345-91fa-ba2fc7532251-proxy-tls\") pod \"machine-config-daemon-mqgwk\" (UID: \"8de2d052-6f7c-4345-91fa-ba2fc7532251\") " pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200084 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thsl5\" (UniqueName: \"kubernetes.io/projected/8de2d052-6f7c-4345-91fa-ba2fc7532251-kube-api-access-thsl5\") pod \"machine-config-daemon-mqgwk\" (UID: \"8de2d052-6f7c-4345-91fa-ba2fc7532251\") " pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.200117 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:16:29.200091146 +0000 UTC m=+35.367879376 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200181 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8de2d052-6f7c-4345-91fa-ba2fc7532251-mcd-auth-proxy-config\") pod \"machine-config-daemon-mqgwk\" (UID: \"8de2d052-6f7c-4345-91fa-ba2fc7532251\") " pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200209 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3cd579b1-57ae-4f44-85b5-53b6c746078b-cni-binary-copy\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200206 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/652f7f95-a748-4fd4-b323-19a93494ddc0-hosts-file\") pod \"node-resolver-qz9lj\" (UID: \"652f7f95-a748-4fd4-b323-19a93494ddc0\") " pod="openshift-dns/node-resolver-qz9lj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200235 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfr44\" (UniqueName: \"kubernetes.io/projected/652f7f95-a748-4fd4-b323-19a93494ddc0-kube-api-access-mfr44\") pod \"node-resolver-qz9lj\" (UID: \"652f7f95-a748-4fd4-b323-19a93494ddc0\") " pod="openshift-dns/node-resolver-qz9lj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200276 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-host-run-k8s-cni-cncf-io\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200302 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-hostroot\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200323 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-cnibin\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200344 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-host-var-lib-kubelet\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 
17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200361 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-multus-conf-dir\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200398 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-multus-socket-dir-parent\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200421 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-host-run-netns\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200435 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-host-var-lib-cni-bin\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200451 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-host-var-lib-cni-multus\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200510 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8de2d052-6f7c-4345-91fa-ba2fc7532251-rootfs\") pod \"machine-config-daemon-mqgwk\" (UID: \"8de2d052-6f7c-4345-91fa-ba2fc7532251\") " pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200550 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-multus-cni-dir\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200565 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3cd579b1-57ae-4f44-85b5-53b6c746078b-multus-daemon-config\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200587 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8de2d052-6f7c-4345-91fa-ba2fc7532251-rootfs\") pod \"machine-config-daemon-mqgwk\" (UID: \"8de2d052-6f7c-4345-91fa-ba2fc7532251\") " pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:16:21 crc 
kubenswrapper[5001]: I0128 17:16:21.200614 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-host-run-multus-certs\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200644 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88h9g\" (UniqueName: \"kubernetes.io/projected/3cd579b1-57ae-4f44-85b5-53b6c746078b-kube-api-access-88h9g\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.200680 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-system-cni-dir\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.201061 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8de2d052-6f7c-4345-91fa-ba2fc7532251-mcd-auth-proxy-config\") pod \"machine-config-daemon-mqgwk\" (UID: \"8de2d052-6f7c-4345-91fa-ba2fc7532251\") " pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.205566 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8de2d052-6f7c-4345-91fa-ba2fc7532251-proxy-tls\") pod \"machine-config-daemon-mqgwk\" (UID: \"8de2d052-6f7c-4345-91fa-ba2fc7532251\") " pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.215180 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.224461 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.224493 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.224504 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.224520 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.224531 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:21Z","lastTransitionTime":"2026-01-28T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.226533 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfr44\" (UniqueName: \"kubernetes.io/projected/652f7f95-a748-4fd4-b323-19a93494ddc0-kube-api-access-mfr44\") pod \"node-resolver-qz9lj\" (UID: \"652f7f95-a748-4fd4-b323-19a93494ddc0\") " pod="openshift-dns/node-resolver-qz9lj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.227545 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thsl5\" (UniqueName: \"kubernetes.io/projected/8de2d052-6f7c-4345-91fa-ba2fc7532251-kube-api-access-thsl5\") pod \"machine-config-daemon-mqgwk\" (UID: \"8de2d052-6f7c-4345-91fa-ba2fc7532251\") " pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.237763 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.255388 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.267648 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.276754 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.301505 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.301555 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-multus-socket-dir-parent\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.301580 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.301598 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-multus-cni-dir\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.301616 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-host-run-netns\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.301631 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-host-var-lib-cni-bin\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.301648 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-host-var-lib-cni-multus\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.301665 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3cd579b1-57ae-4f44-85b5-53b6c746078b-multus-daemon-config\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.301671 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-multus-socket-dir-parent\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.301674 5001 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.301720 5001 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-host-run-multus-certs\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.301739 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.301761 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-host-var-lib-cni-multus\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.301766 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.301782 5001 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.301783 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-multus-cni-dir\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.301721 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-host-var-lib-cni-bin\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.301770 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:29.301752879 +0000 UTC m=+35.469541109 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.301839 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-host-run-netns\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.301849 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:29.301835511 +0000 UTC m=+35.469623851 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.301681 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-host-run-multus-certs\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.301883 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88h9g\" (UniqueName: \"kubernetes.io/projected/3cd579b1-57ae-4f44-85b5-53b6c746078b-kube-api-access-88h9g\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.301904 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-system-cni-dir\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.301928 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-etc-kubernetes\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.301955 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-os-release\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.302000 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.302021 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3cd579b1-57ae-4f44-85b5-53b6c746078b-cni-binary-copy\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.302040 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.302059 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-etc-kubernetes\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.302063 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-host-run-k8s-cni-cncf-io\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.302086 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-hostroot\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.302114 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-cnibin\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.302134 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-host-var-lib-kubelet\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.302150 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-multus-conf-dir\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.302195 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-multus-conf-dir\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " 
pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.302233 5001 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.302260 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:29.302253193 +0000 UTC m=+35.470041413 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.302304 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-os-release\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.302318 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-system-cni-dir\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.302348 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-host-run-k8s-cni-cncf-io\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.302362 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.302374 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-hostroot\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.302379 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.302402 5001 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.302442 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3cd579b1-57ae-4f44-85b5-53b6c746078b-multus-daemon-config\") pod \"multus-7fgxj\" (UID: 
\"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.302445 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-cnibin\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.302451 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:29.302435578 +0000 UTC m=+35.470223908 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.302473 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3cd579b1-57ae-4f44-85b5-53b6c746078b-host-var-lib-kubelet\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.319760 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-qz9lj" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.326380 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.326413 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.326426 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.326442 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.326454 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:21Z","lastTransitionTime":"2026-01-28T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.327720 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:16:21 crc kubenswrapper[5001]: W0128 17:16:21.332129 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod652f7f95_a748_4fd4_b323_19a93494ddc0.slice/crio-901e15a284ce24fa5c474f63629b6e8c5af0a1450acbafc5aedd644138ffbb94 WatchSource:0}: Error finding container 901e15a284ce24fa5c474f63629b6e8c5af0a1450acbafc5aedd644138ffbb94: Status 404 returned error can't find the container with id 901e15a284ce24fa5c474f63629b6e8c5af0a1450acbafc5aedd644138ffbb94 Jan 28 17:16:21 crc kubenswrapper[5001]: W0128 17:16:21.339501 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8de2d052_6f7c_4345_91fa_ba2fc7532251.slice/crio-4b92ce745fc1cce219c80b777048b5a1a1fa5c591e6404eb096aa1206043e596 WatchSource:0}: Error finding container 4b92ce745fc1cce219c80b777048b5a1a1fa5c591e6404eb096aa1206043e596: Status 404 returned error can't find the container with id 4b92ce745fc1cce219c80b777048b5a1a1fa5c591e6404eb096aa1206043e596 Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.381643 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-dhcr2"] Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.382294 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cnffr"] Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.383033 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.383095 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.385936 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.386291 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.386432 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.386671 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.387172 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.387190 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.387277 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.387359 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.387411 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.398023 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.412757 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.424957 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.429908 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.429940 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.429950 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:21 crc kubenswrapper[5001]: 
I0128 17:16:21.429966 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.430001 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:21Z","lastTransitionTime":"2026-01-28T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.441999 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.455014 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.467876 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.478056 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.490359 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.503495 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-slash\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.503551 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-systemd-units\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.503578 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-run-netns\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.503601 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.503647 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-node-log\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.503716 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-cni-netd\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.503738 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chwvf\" (UniqueName: \"kubernetes.io/projected/324b03b5-a748-440b-b1ad-15022599b855-kube-api-access-chwvf\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.503777 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-run-systemd\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.503812 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/324b03b5-a748-440b-b1ad-15022599b855-ovn-node-metrics-cert\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.503834 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-cni-bin\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.503873 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.503928 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-kubelet\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.503967 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/324b03b5-a748-440b-b1ad-15022599b855-ovnkube-config\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: 
I0128 17:16:21.504060 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-run-ovn\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.504102 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/324b03b5-a748-440b-b1ad-15022599b855-ovnkube-script-lib\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.504155 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-etc-openvswitch\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.504394 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-cnibin\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.504610 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btg7l\" (UniqueName: \"kubernetes.io/projected/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-kube-api-access-btg7l\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.504691 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.504727 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-system-cni-dir\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.504764 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-var-lib-openvswitch\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.504786 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-run-openvswitch\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.504807 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-cni-binary-copy\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.504833 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-log-socket\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.504851 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-run-ovn-kubernetes\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.504871 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/324b03b5-a748-440b-b1ad-15022599b855-env-overrides\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.504891 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-os-release\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.508041 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.523698 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.532064 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.533363 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.533492 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.533639 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.533808 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:21Z","lastTransitionTime":"2026-01-28T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.538113 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.552800 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.560834 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:11:46.54793629 +0000 UTC Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.574132 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.593383 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.593470 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.593396 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.593568 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.593807 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:21 crc kubenswrapper[5001]: E0128 17:16:21.594072 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.594437 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605395 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-slash\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605440 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-run-netns\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605463 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " 
pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605488 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-systemd-units\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605518 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-node-log\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605537 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-cni-netd\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605538 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-slash\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605559 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-run-systemd\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605579 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/324b03b5-a748-440b-b1ad-15022599b855-ovn-node-metrics-cert\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605603 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chwvf\" (UniqueName: \"kubernetes.io/projected/324b03b5-a748-440b-b1ad-15022599b855-kube-api-access-chwvf\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605630 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-run-netns\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605634 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-cni-bin\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605657 
5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605680 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-kubelet\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605702 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/324b03b5-a748-440b-b1ad-15022599b855-ovnkube-config\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605734 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-run-ovn\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605754 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/324b03b5-a748-440b-b1ad-15022599b855-ovnkube-script-lib\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605787 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-etc-openvswitch\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605807 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-cnibin\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605847 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605870 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btg7l\" (UniqueName: \"kubernetes.io/projected/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-kube-api-access-btg7l\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 crc 
kubenswrapper[5001]: I0128 17:16:21.605892 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-system-cni-dir\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605938 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-var-lib-openvswitch\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605962 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-run-openvswitch\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606004 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-cni-binary-copy\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606026 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-log-socket\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606047 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-run-ovn-kubernetes\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606067 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/324b03b5-a748-440b-b1ad-15022599b855-env-overrides\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606088 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-os-release\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606184 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-os-release\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 
crc kubenswrapper[5001]: I0128 17:16:21.606234 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-run-openvswitch\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606260 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-var-lib-openvswitch\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606337 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-log-socket\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606369 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-etc-openvswitch\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606396 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-cnibin\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606413 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606426 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606454 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-system-cni-dir\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606692 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-run-systemd\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606734 5001 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-node-log\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606770 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-cni-netd\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606806 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-kubelet\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606838 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-cni-bin\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606840 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-run-ovn-kubernetes\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606859 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/324b03b5-a748-440b-b1ad-15022599b855-ovnkube-script-lib\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606905 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-run-ovn\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.606912 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/324b03b5-a748-440b-b1ad-15022599b855-env-overrides\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.605602 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-systemd-units\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.607601 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.607878 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/324b03b5-a748-440b-b1ad-15022599b855-ovnkube-config\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.609463 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.610242 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/324b03b5-a748-440b-b1ad-15022599b855-ovn-node-metrics-cert\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.625538 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chwvf\" (UniqueName: \"kubernetes.io/projected/324b03b5-a748-440b-b1ad-15022599b855-kube-api-access-chwvf\") pod \"ovnkube-node-cnffr\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.630130 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.636883 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.637186 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.637279 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.637398 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.637489 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:21Z","lastTransitionTime":"2026-01-28T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.643512 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.658161 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.672779 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.688000 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from 
k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.702831 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.714774 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.725328 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.741150 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.741189 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.741200 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.741216 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.741225 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:21Z","lastTransitionTime":"2026-01-28T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.742605 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.753113 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qz9lj" event={"ID":"652f7f95-a748-4fd4-b323-19a93494ddc0","Type":"ContainerStarted","Data":"a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc"} Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.753162 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/node-resolver-qz9lj" event={"ID":"652f7f95-a748-4fd4-b323-19a93494ddc0","Type":"ContainerStarted","Data":"901e15a284ce24fa5c474f63629b6e8c5af0a1450acbafc5aedd644138ffbb94"} Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.754964 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerStarted","Data":"3a545b66a10a7b355086626a2562795ad59ca42ed460cf9800b4d0de3b86ca5a"} Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.755375 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.758128 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerStarted","Data":"ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db"} Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.758360 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerStarted","Data":"baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9"} Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.758457 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerStarted","Data":"4b92ce745fc1cce219c80b777048b5a1a1fa5c591e6404eb096aa1206043e596"} Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.769673 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.783178 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.796239 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.823466 5001 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\
\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.836307 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.843646 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.843697 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.843705 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.843720 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.843730 5001 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:21Z","lastTransitionTime":"2026-01-28T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.850046 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.862961 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.877586 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.892306 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.908245 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.926056 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.932962 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.945880 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.945945 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:21 crc 
kubenswrapper[5001]: I0128 17:16:21.945956 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.946005 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.946020 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:21Z","lastTransitionTime":"2026-01-28T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.946197 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.960031 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.974518 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:21 crc kubenswrapper[5001]: I0128 17:16:21.986521 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:21Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.048898 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.049172 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.049252 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.049335 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.049423 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:22Z","lastTransitionTime":"2026-01-28T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.151620 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.151677 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.151685 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.151698 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.151707 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:22Z","lastTransitionTime":"2026-01-28T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.178031 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.183240 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3cd579b1-57ae-4f44-85b5-53b6c746078b-cni-binary-copy\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.187736 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-cni-binary-copy\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.254577 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.254928 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.254945 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.254963 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.254994 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:22Z","lastTransitionTime":"2026-01-28T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:22 crc kubenswrapper[5001]: E0128 17:16:22.314305 5001 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.349874 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 17:16:22 crc kubenswrapper[5001]: E0128 17:16:22.355784 5001 projected.go:194] Error preparing data for projected volume kube-api-access-88h9g for pod openshift-multus/multus-7fgxj: failed to sync configmap cache: timed out waiting for the condition Jan 28 17:16:22 crc kubenswrapper[5001]: E0128 17:16:22.355862 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3cd579b1-57ae-4f44-85b5-53b6c746078b-kube-api-access-88h9g podName:3cd579b1-57ae-4f44-85b5-53b6c746078b nodeName:}" failed. No retries permitted until 2026-01-28 17:16:22.855843915 +0000 UTC m=+29.023632135 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-88h9g" (UniqueName: "kubernetes.io/projected/3cd579b1-57ae-4f44-85b5-53b6c746078b-kube-api-access-88h9g") pod "multus-7fgxj" (UID: "3cd579b1-57ae-4f44-85b5-53b6c746078b") : failed to sync configmap cache: timed out waiting for the condition Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.357529 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.357563 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.357576 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.357591 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.357601 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:22Z","lastTransitionTime":"2026-01-28T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.459749 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.459790 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.459801 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.459822 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.459835 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:22Z","lastTransitionTime":"2026-01-28T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.487668 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.494052 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btg7l\" (UniqueName: \"kubernetes.io/projected/6b1b9ddb-6773-4b38-beb0-07d93f29f1af-kube-api-access-btg7l\") pod \"multus-additional-cni-plugins-dhcr2\" (UID: \"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\") " pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.561605 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:59:52.325190591 +0000 UTC Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.562190 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.562246 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.562260 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.562282 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.562310 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:22Z","lastTransitionTime":"2026-01-28T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.603941 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" Jan 28 17:16:22 crc kubenswrapper[5001]: W0128 17:16:22.620710 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b1b9ddb_6773_4b38_beb0_07d93f29f1af.slice/crio-922368cdea8f4a9c53df272890e59cba521f02947ca598dde9645eda943e8214 WatchSource:0}: Error finding container 922368cdea8f4a9c53df272890e59cba521f02947ca598dde9645eda943e8214: Status 404 returned error can't find the container with id 922368cdea8f4a9c53df272890e59cba521f02947ca598dde9645eda943e8214 Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.666296 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.666641 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.666656 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.666676 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.666689 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:22Z","lastTransitionTime":"2026-01-28T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.765089 5001 generic.go:334] "Generic (PLEG): container finished" podID="324b03b5-a748-440b-b1ad-15022599b855" containerID="8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6" exitCode=0 Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.765162 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerDied","Data":"8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6"} Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.770907 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.770937 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.770948 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.770963 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.770988 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:22Z","lastTransitionTime":"2026-01-28T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.771695 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" event={"ID":"6b1b9ddb-6773-4b38-beb0-07d93f29f1af","Type":"ContainerStarted","Data":"922368cdea8f4a9c53df272890e59cba521f02947ca598dde9645eda943e8214"} Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.781090 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:22Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.793337 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:22Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.808734 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:22Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.823097 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:22Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.838290 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:22Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.855518 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:22Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.867281 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:22Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.873431 5001 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.873468 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.873480 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.873497 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.873508 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:22Z","lastTransitionTime":"2026-01-28T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.886171 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:22Z 
is after 2025-08-24T17:21:41Z" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.904117 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:22Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.917095 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:22Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.920463 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88h9g\" (UniqueName: \"kubernetes.io/projected/3cd579b1-57ae-4f44-85b5-53b6c746078b-kube-api-access-88h9g\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.927637 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88h9g\" (UniqueName: \"kubernetes.io/projected/3cd579b1-57ae-4f44-85b5-53b6c746078b-kube-api-access-88h9g\") pod \"multus-7fgxj\" (UID: \"3cd579b1-57ae-4f44-85b5-53b6c746078b\") " pod="openshift-multus/multus-7fgxj" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.930922 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:22Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.943536 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:22Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.956220 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:22Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.975426 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.975469 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.975480 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.975497 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:22 crc kubenswrapper[5001]: I0128 17:16:22.975511 5001 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:22Z","lastTransitionTime":"2026-01-28T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.077169 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.077448 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.077457 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.077470 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.077478 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:23Z","lastTransitionTime":"2026-01-28T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.133605 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-7fgxj" Jan 28 17:16:23 crc kubenswrapper[5001]: W0128 17:16:23.145739 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cd579b1_57ae_4f44_85b5_53b6c746078b.slice/crio-7bbcfa9e418cf39b56c1c89ee32bd59d0394bea7c34de62a62c518117eeece67 WatchSource:0}: Error finding container 7bbcfa9e418cf39b56c1c89ee32bd59d0394bea7c34de62a62c518117eeece67: Status 404 returned error can't find the container with id 7bbcfa9e418cf39b56c1c89ee32bd59d0394bea7c34de62a62c518117eeece67 Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.180308 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.180535 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.180544 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.180559 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.180569 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:23Z","lastTransitionTime":"2026-01-28T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.283517 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.283543 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.283550 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.283563 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.283571 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:23Z","lastTransitionTime":"2026-01-28T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.386026 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.386541 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.386552 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.386566 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.386575 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:23Z","lastTransitionTime":"2026-01-28T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.488582 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.488618 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.488629 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.488645 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.488656 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:23Z","lastTransitionTime":"2026-01-28T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.562018 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 11:41:44.630280414 +0000 UTC Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.590667 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.590702 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.590710 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.590723 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.590732 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:23Z","lastTransitionTime":"2026-01-28T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.593910 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.593997 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:23 crc kubenswrapper[5001]: E0128 17:16:23.594062 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:23 crc kubenswrapper[5001]: E0128 17:16:23.594200 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.593921 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:23 crc kubenswrapper[5001]: E0128 17:16:23.594322 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.621190 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.621220 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.621228 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.621241 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.621251 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:23Z","lastTransitionTime":"2026-01-28T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:23 crc kubenswrapper[5001]: E0128 17:16:23.636327 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.639468 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.639493 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.639501 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.639513 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.639523 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:23Z","lastTransitionTime":"2026-01-28T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:23 crc kubenswrapper[5001]: E0128 17:16:23.651224 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.653944 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.653993 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.654002 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.654017 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.654028 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:23Z","lastTransitionTime":"2026-01-28T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:23 crc kubenswrapper[5001]: E0128 17:16:23.666713 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.669639 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.669681 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.669690 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.669705 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.669714 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:23Z","lastTransitionTime":"2026-01-28T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:23 crc kubenswrapper[5001]: E0128 17:16:23.682368 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.685188 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.685226 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.685235 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.685253 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.685266 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:23Z","lastTransitionTime":"2026-01-28T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:23 crc kubenswrapper[5001]: E0128 17:16:23.697139 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc kubenswrapper[5001]: E0128 17:16:23.697313 5001 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.699221 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.699328 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.699346 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.699371 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.699384 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:23Z","lastTransitionTime":"2026-01-28T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.776331 5001 generic.go:334] "Generic (PLEG): container finished" podID="6b1b9ddb-6773-4b38-beb0-07d93f29f1af" containerID="bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1" exitCode=0 Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.776685 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" event={"ID":"6b1b9ddb-6773-4b38-beb0-07d93f29f1af","Type":"ContainerDied","Data":"bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1"} Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.778192 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7fgxj" event={"ID":"3cd579b1-57ae-4f44-85b5-53b6c746078b","Type":"ContainerStarted","Data":"6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a"} Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.778245 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7fgxj" event={"ID":"3cd579b1-57ae-4f44-85b5-53b6c746078b","Type":"ContainerStarted","Data":"7bbcfa9e418cf39b56c1c89ee32bd59d0394bea7c34de62a62c518117eeece67"} Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.783029 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerStarted","Data":"f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176"} Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.783057 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerStarted","Data":"88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358"} Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.783096 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerStarted","Data":"425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2"} Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.783108 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerStarted","Data":"ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df"} Jan 28 17:16:23 crc 
kubenswrapper[5001]: I0128 17:16:23.783119 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerStarted","Data":"505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4"} Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.783129 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerStarted","Data":"311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1"} Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.789887 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 
reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.799604 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.801785 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.801895 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.801966 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.802097 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.802159 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:23Z","lastTransitionTime":"2026-01-28T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.813321 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.836167 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc 
kubenswrapper[5001]: I0128 17:16:23.848017 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.865319 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.877235 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.888601 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.902239 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.908361 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.908404 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.908411 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.908424 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.908433 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:23Z","lastTransitionTime":"2026-01-28T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.914824 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.928993 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.944470 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.961665 5001 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.973432 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.984753 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:23 crc kubenswrapper[5001]: I0128 17:16:23.995457 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:23Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.007281 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.010646 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.010686 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.010698 5001 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.010715 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.011106 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:24Z","lastTransitionTime":"2026-01-28T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.020644 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.031202 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.048185 5001 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.062205 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.071835 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.084577 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.099996 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.113211 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.113245 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.113253 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.113267 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.113276 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:24Z","lastTransitionTime":"2026-01-28T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.116826 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.129428 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.215546 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.215599 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.215611 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.215628 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.215640 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:24Z","lastTransitionTime":"2026-01-28T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.296319 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-bzc7t"] Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.296730 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bzc7t" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.300067 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.300160 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.300290 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.300652 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.312639 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.317668 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.317702 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.317711 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.317729 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.317740 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:24Z","lastTransitionTime":"2026-01-28T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.327062 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.334555 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9e939ff-2430-40ba-895c-51e6dc6561e4-host\") pod \"node-ca-bzc7t\" (UID: \"b9e939ff-2430-40ba-895c-51e6dc6561e4\") " pod="openshift-image-registry/node-ca-bzc7t" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.334603 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b9e939ff-2430-40ba-895c-51e6dc6561e4-serviceca\") pod \"node-ca-bzc7t\" (UID: \"b9e939ff-2430-40ba-895c-51e6dc6561e4\") " pod="openshift-image-registry/node-ca-bzc7t" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.334621 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb5xm\" (UniqueName: \"kubernetes.io/projected/b9e939ff-2430-40ba-895c-51e6dc6561e4-kube-api-access-cb5xm\") pod \"node-ca-bzc7t\" (UID: \"b9e939ff-2430-40ba-895c-51e6dc6561e4\") " pod="openshift-image-registry/node-ca-bzc7t" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 
17:16:24.338144 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.350252 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.363841 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.369523 5001 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.370109 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-mqgwk/status\": read tcp 38.102.83.30:39164->38.102.83.30:6443: use of closed network connection" Jan 28 17:16:24 crc kubenswrapper[5001]: W0128 17:16:24.370823 5001 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: 
object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 17:16:24 crc kubenswrapper[5001]: W0128 17:16:24.370873 5001 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 28 17:16:24 crc kubenswrapper[5001]: W0128 17:16:24.372155 5001 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 28 17:16:24 crc kubenswrapper[5001]: W0128 17:16:24.372277 5001 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.406503 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z 
is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.419943 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.419991 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.420002 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.420019 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.420031 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:24Z","lastTransitionTime":"2026-01-28T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.420670 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.430425 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.435511 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b9e939ff-2430-40ba-895c-51e6dc6561e4-serviceca\") pod \"node-ca-bzc7t\" (UID: \"b9e939ff-2430-40ba-895c-51e6dc6561e4\") " pod="openshift-image-registry/node-ca-bzc7t" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.435559 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb5xm\" (UniqueName: \"kubernetes.io/projected/b9e939ff-2430-40ba-895c-51e6dc6561e4-kube-api-access-cb5xm\") pod \"node-ca-bzc7t\" (UID: \"b9e939ff-2430-40ba-895c-51e6dc6561e4\") " pod="openshift-image-registry/node-ca-bzc7t" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.435617 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9e939ff-2430-40ba-895c-51e6dc6561e4-host\") pod \"node-ca-bzc7t\" (UID: \"b9e939ff-2430-40ba-895c-51e6dc6561e4\") " 
pod="openshift-image-registry/node-ca-bzc7t" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.435695 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9e939ff-2430-40ba-895c-51e6dc6561e4-host\") pod \"node-ca-bzc7t\" (UID: \"b9e939ff-2430-40ba-895c-51e6dc6561e4\") " pod="openshift-image-registry/node-ca-bzc7t" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.436850 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b9e939ff-2430-40ba-895c-51e6dc6561e4-serviceca\") pod \"node-ca-bzc7t\" (UID: \"b9e939ff-2430-40ba-895c-51e6dc6561e4\") " pod="openshift-image-registry/node-ca-bzc7t" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.446779 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\
\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.453377 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb5xm\" (UniqueName: \"kubernetes.io/projected/b9e939ff-2430-40ba-895c-51e6dc6561e4-kube-api-access-cb5xm\") pod \"node-ca-bzc7t\" (UID: \"b9e939ff-2430-40ba-895c-51e6dc6561e4\") " pod="openshift-image-registry/node-ca-bzc7t" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.467057 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc 
kubenswrapper[5001]: I0128 17:16:24.477289 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.490387 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.504954 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.522552 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.522632 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.522647 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.522672 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.522690 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:24Z","lastTransitionTime":"2026-01-28T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.563047 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 21:30:27.23475528 +0000 UTC Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.607076 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.611120 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bzc7t" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.620162 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.624988 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.625018 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.625030 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.625045 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.625055 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:24Z","lastTransitionTime":"2026-01-28T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.636574 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.650861 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.664323 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.683901 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.694933 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.713870 5001 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.727531 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.727569 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.727579 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.727593 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.727603 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:24Z","lastTransitionTime":"2026-01-28T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.731352 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.744915 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.757841 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.770688 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.779294 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.787687 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bzc7t" event={"ID":"b9e939ff-2430-40ba-895c-51e6dc6561e4","Type":"ContainerStarted","Data":"3d07f6e224021978ecf0d72d9b146c397485b9819313dc26c66de46f6adb84c3"} Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.789275 5001 generic.go:334] "Generic (PLEG): container finished" podID="6b1b9ddb-6773-4b38-beb0-07d93f29f1af" containerID="6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2" exitCode=0 Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.789360 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" event={"ID":"6b1b9ddb-6773-4b38-beb0-07d93f29f1af","Type":"ContainerDied","Data":"6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2"} Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.791794 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.816445 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.829468 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.829497 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.829505 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.829518 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.829527 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:24Z","lastTransitionTime":"2026-01-28T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.854910 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.907593 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.934198 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.934232 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.934240 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.934255 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.934263 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:24Z","lastTransitionTime":"2026-01-28T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.944598 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:24 crc kubenswrapper[5001]: I0128 17:16:24.973738 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:24Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.015759 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.036406 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.036445 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.036458 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.036476 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.036487 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:25Z","lastTransitionTime":"2026-01-28T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.057274 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.101378 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.136678 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.138518 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.138545 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.138554 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.138568 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.138577 5001 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:25Z","lastTransitionTime":"2026-01-28T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.174160 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.215348 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.241247 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.241285 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.241297 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.241314 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.241327 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:25Z","lastTransitionTime":"2026-01-28T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.256308 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.292792 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.338510 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.343130 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.343161 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.343171 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.343186 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.343199 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:25Z","lastTransitionTime":"2026-01-28T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.352766 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.445903 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.445986 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.445999 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.446014 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.446025 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:25Z","lastTransitionTime":"2026-01-28T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.548808 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.548855 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.548869 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.548889 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.548902 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:25Z","lastTransitionTime":"2026-01-28T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.563252 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:09:06.246145172 +0000 UTC Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.575853 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.616547 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:25 crc kubenswrapper[5001]: E0128 17:16:25.616928 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.616604 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:25 crc kubenswrapper[5001]: E0128 17:16:25.617161 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.616604 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:25 crc kubenswrapper[5001]: E0128 17:16:25.617260 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.654203 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.654244 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.654254 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.654269 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.654279 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:25Z","lastTransitionTime":"2026-01-28T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.757731 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.757764 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.757774 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.757790 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.757802 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:25Z","lastTransitionTime":"2026-01-28T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.793417 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bzc7t" event={"ID":"b9e939ff-2430-40ba-895c-51e6dc6561e4","Type":"ContainerStarted","Data":"09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59"} Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.795895 5001 generic.go:334] "Generic (PLEG): container finished" podID="6b1b9ddb-6773-4b38-beb0-07d93f29f1af" containerID="21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe" exitCode=0 Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.795958 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" event={"ID":"6b1b9ddb-6773-4b38-beb0-07d93f29f1af","Type":"ContainerDied","Data":"21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe"} Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.801264 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerStarted","Data":"5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb"} Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.810919 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.811379 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.824299 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.841774 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.854589 5001 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.861054 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.861100 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.861111 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.861129 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.861140 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:25Z","lastTransitionTime":"2026-01-28T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.861550 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z 
is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.874188 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.887611 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.903630 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.915765 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.930281 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.941608 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.954377 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.963254 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.963301 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.963313 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.963333 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.963345 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:25Z","lastTransitionTime":"2026-01-28T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.966444 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.978421 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:25 crc kubenswrapper[5001]: I0128 17:16:25.991495 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:25Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.019888 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.058638 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.065796 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.065833 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.065844 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.065861 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.065877 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:26Z","lastTransitionTime":"2026-01-28T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.099318 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.136995 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.168519 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.168573 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.168592 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.168615 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.168647 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:26Z","lastTransitionTime":"2026-01-28T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.178479 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.216236 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.256622 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.271303 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.271338 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.271349 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.271365 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.271377 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:26Z","lastTransitionTime":"2026-01-28T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.294075 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.347851 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.373490 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.373532 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.373549 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.373567 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.373578 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:26Z","lastTransitionTime":"2026-01-28T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.385545 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.418767 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.459269 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.478878 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.478938 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.478962 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.479041 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.479068 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:26Z","lastTransitionTime":"2026-01-28T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.500897 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.533082 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 
2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.563472 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 19:54:39.038135821 +0000 UTC Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.582288 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.582316 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.582326 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.582339 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.582348 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:26Z","lastTransitionTime":"2026-01-28T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.685787 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.685835 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.685852 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.685876 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.685892 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:26Z","lastTransitionTime":"2026-01-28T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.789402 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.789450 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.789467 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.789500 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.789526 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:26Z","lastTransitionTime":"2026-01-28T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.812692 5001 generic.go:334] "Generic (PLEG): container finished" podID="6b1b9ddb-6773-4b38-beb0-07d93f29f1af" containerID="5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6" exitCode=0 Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.812780 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" event={"ID":"6b1b9ddb-6773-4b38-beb0-07d93f29f1af","Type":"ContainerDied","Data":"5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6"} Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.826643 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.840949 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 
17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.858659 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.871995 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.891741 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.891793 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.891811 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.891836 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.891855 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:26Z","lastTransitionTime":"2026-01-28T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.895666 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.907732 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.919757 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.931428 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.949553 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.960149 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.978693 5001 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:26Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.994284 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.994329 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.994342 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.994357 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:26 crc kubenswrapper[5001]: I0128 17:16:26.994367 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:26Z","lastTransitionTime":"2026-01-28T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.015442 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:27Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.056291 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:27Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.096834 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:27Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.097422 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.097451 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.097459 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.097472 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.097481 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:27Z","lastTransitionTime":"2026-01-28T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.200244 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.200327 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.200348 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.200412 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.200429 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:27Z","lastTransitionTime":"2026-01-28T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.302558 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.302599 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.302609 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.302627 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.302638 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:27Z","lastTransitionTime":"2026-01-28T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.404801 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.404843 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.404851 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.404865 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.404882 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:27Z","lastTransitionTime":"2026-01-28T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.509088 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.509229 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.509313 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.509464 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.509550 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:27Z","lastTransitionTime":"2026-01-28T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.563773 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 08:35:07.610875618 +0000 UTC Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.593592 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.593654 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:27 crc kubenswrapper[5001]: E0128 17:16:27.593726 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.593801 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:27 crc kubenswrapper[5001]: E0128 17:16:27.593794 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:27 crc kubenswrapper[5001]: E0128 17:16:27.593856 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.612070 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.612104 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.612115 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.612130 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.612140 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:27Z","lastTransitionTime":"2026-01-28T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.714651 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.714695 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.714710 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.714730 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.714745 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:27Z","lastTransitionTime":"2026-01-28T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.816239 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.816275 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.816286 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.816304 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.816318 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:27Z","lastTransitionTime":"2026-01-28T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.818172 5001 generic.go:334] "Generic (PLEG): container finished" podID="6b1b9ddb-6773-4b38-beb0-07d93f29f1af" containerID="f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff" exitCode=0 Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.818220 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" event={"ID":"6b1b9ddb-6773-4b38-beb0-07d93f29f1af","Type":"ContainerDied","Data":"f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff"} Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.830841 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:27Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.854943 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:27Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.866802 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:27Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.878932 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:27Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.889641 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:27Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.900600 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:27Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.918668 5001 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:27Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.919380 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.919429 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.919445 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.919466 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.919481 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:27Z","lastTransitionTime":"2026-01-28T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.929146 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:27Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.940059 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:27Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.954593 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:27Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.967292 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:27Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.981213 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 
17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:27Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:27 crc kubenswrapper[5001]: I0128 17:16:27.990884 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:27Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.002931 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.022045 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.022088 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.022100 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.022118 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.022132 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:28Z","lastTransitionTime":"2026-01-28T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.127687 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.127722 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.127730 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.127743 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.127752 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:28Z","lastTransitionTime":"2026-01-28T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.224044 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.230304 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.230349 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.230359 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.230377 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.230390 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:28Z","lastTransitionTime":"2026-01-28T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.238116 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.248474 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.265132 5001 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.277130 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.290079 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.301175 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.332591 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.332630 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.332640 5001 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.332655 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.332664 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:28Z","lastTransitionTime":"2026-01-28T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.332801 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.345292 5001 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18e
e6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.354819 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.387176 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.401887 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.414706 5001 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.426441 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.434473 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.434516 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.434527 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.434544 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.434562 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:28Z","lastTransitionTime":"2026-01-28T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.439150 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.537466 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.537535 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.537548 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.537571 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.537591 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:28Z","lastTransitionTime":"2026-01-28T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.564788 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 00:51:43.948325883 +0000 UTC Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.640882 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.640937 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.640954 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.640999 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.641011 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:28Z","lastTransitionTime":"2026-01-28T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.743236 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.743287 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.743312 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.743334 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.743348 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:28Z","lastTransitionTime":"2026-01-28T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.823831 5001 generic.go:334] "Generic (PLEG): container finished" podID="6b1b9ddb-6773-4b38-beb0-07d93f29f1af" containerID="5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8" exitCode=0 Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.823881 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" event={"ID":"6b1b9ddb-6773-4b38-beb0-07d93f29f1af","Type":"ContainerDied","Data":"5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8"} Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.829734 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerStarted","Data":"16d31cde8bcdb630620e678496f008f3a75548400650719deaa8cef5586f898c"} Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.830121 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.830170 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.842380 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.846373 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.846420 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.846433 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.846455 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.846471 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:28Z","lastTransitionTime":"2026-01-28T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.860509 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.862152 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.863568 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.888763 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.905046 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25971
26bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.923288 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.936085 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.949334 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.949377 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.949388 5001 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.949403 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.949412 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:28Z","lastTransitionTime":"2026-01-28T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.964065 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.977124 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:28 crc kubenswrapper[5001]: I0128 17:16:28.990309 5001 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa
5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.002753 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:28Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.021998 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.036634 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08
a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.048094 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.051816 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.051845 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.051853 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.051867 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:29 crc 
kubenswrapper[5001]: I0128 17:16:29.051876 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:29Z","lastTransitionTime":"2026-01-28T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.060715 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.072240 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.085014 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.100314 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.113960 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.126012 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.136887 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.147464 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.153599 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.153636 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.153647 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.153665 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.153676 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:29Z","lastTransitionTime":"2026-01-28T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.160463 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.171623 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.183768 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.215188 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 
17:16:29.254252 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.255997 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.256048 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.256065 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.256086 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.256101 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:29Z","lastTransitionTime":"2026-01-28T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.283414 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:16:29 crc kubenswrapper[5001]: E0128 17:16:29.283590 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:16:45.283566824 +0000 UTC m=+51.451355054 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.303224 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/k
ubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d31cde8bcdb630620e678496f008f3a75548400650719deaa8cef5586f898c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-o
penvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 
2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.335222 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.358632 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.358852 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.358952 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.359077 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.359156 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:29Z","lastTransitionTime":"2026-01-28T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.385162 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.385551 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:29 crc kubenswrapper[5001]: E0128 17:16:29.385396 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 17:16:29 crc kubenswrapper[5001]: E0128 17:16:29.385854 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 17:16:29 crc kubenswrapper[5001]: E0128 17:16:29.385868 5001 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.385800 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:29 crc kubenswrapper[5001]: E0128 17:16:29.385918 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:45.385902345 +0000 UTC m=+51.553690575 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:29 crc kubenswrapper[5001]: E0128 17:16:29.385632 5001 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.386075 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:29 crc kubenswrapper[5001]: E0128 17:16:29.386167 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:45.386124741 +0000 UTC m=+51.553912971 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 17:16:29 crc kubenswrapper[5001]: E0128 17:16:29.386268 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 17:16:29 crc kubenswrapper[5001]: E0128 17:16:29.386295 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 17:16:29 crc kubenswrapper[5001]: E0128 17:16:29.386311 5001 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:29 crc kubenswrapper[5001]: E0128 17:16:29.386376 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:45.386352577 +0000 UTC m=+51.554140817 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:29 crc kubenswrapper[5001]: E0128 17:16:29.387139 5001 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 17:16:29 crc kubenswrapper[5001]: E0128 17:16:29.387262 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 17:16:45.387228332 +0000 UTC m=+51.555016622 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.464702 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.464794 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.464830 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.464846 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.464858 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:29Z","lastTransitionTime":"2026-01-28T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.565905 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:43:27.779317809 +0000 UTC Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.567344 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.567388 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.567396 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.567410 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.567418 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:29Z","lastTransitionTime":"2026-01-28T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.593471 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.593585 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:29 crc kubenswrapper[5001]: E0128 17:16:29.593642 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:29 crc kubenswrapper[5001]: E0128 17:16:29.593773 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.593866 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:29 crc kubenswrapper[5001]: E0128 17:16:29.593914 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.670131 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.670196 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.670212 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.670232 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.670243 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:29Z","lastTransitionTime":"2026-01-28T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.772812 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.772851 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.772859 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.772873 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.772884 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:29Z","lastTransitionTime":"2026-01-28T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.839677 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" event={"ID":"6b1b9ddb-6773-4b38-beb0-07d93f29f1af","Type":"ContainerStarted","Data":"158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4"} Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.839717 5001 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.855593 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.871490 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.875011 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.875055 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.875067 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.875085 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.875098 5001 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:29Z","lastTransitionTime":"2026-01-28T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.884433 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.899819 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.916435 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.929712 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.941088 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.953930 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.967255 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.978208 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.982093 5001 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.982127 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.982141 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.982160 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.982171 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:29Z","lastTransitionTime":"2026-01-28T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:29 crc kubenswrapper[5001]: I0128 17:16:29.995776 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d31cde8bcdb630620e678496f008f3a7554840
0650719deaa8cef5586f898c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:29Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.007915 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:30Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.018915 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:30Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.030833 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:30Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.084861 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.084912 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.084924 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.084938 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.084952 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:30Z","lastTransitionTime":"2026-01-28T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.187635 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.187677 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.187687 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.187703 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.187719 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:30Z","lastTransitionTime":"2026-01-28T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.289794 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.289867 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.289885 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.289930 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.289942 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:30Z","lastTransitionTime":"2026-01-28T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.392786 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.392832 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.392843 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.392858 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.392902 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:30Z","lastTransitionTime":"2026-01-28T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.495759 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.495807 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.495816 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.495833 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.495844 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:30Z","lastTransitionTime":"2026-01-28T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.566444 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 01:04:25.644478369 +0000 UTC Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.598206 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.598349 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.598372 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.598401 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.598424 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:30Z","lastTransitionTime":"2026-01-28T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.651418 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.700574 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.700610 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.700627 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.700649 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.700666 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:30Z","lastTransitionTime":"2026-01-28T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.802565 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.802617 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.802627 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.802640 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.802649 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:30Z","lastTransitionTime":"2026-01-28T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.905411 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.905464 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.905474 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.905487 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:30 crc kubenswrapper[5001]: I0128 17:16:30.905496 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:30Z","lastTransitionTime":"2026-01-28T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.007762 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.007821 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.007833 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.007868 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.007881 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:31Z","lastTransitionTime":"2026-01-28T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.110156 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.110204 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.110215 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.110232 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.110243 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:31Z","lastTransitionTime":"2026-01-28T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.212922 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.213018 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.213031 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.213047 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.213059 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:31Z","lastTransitionTime":"2026-01-28T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.315619 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.315665 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.315675 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.315691 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.315702 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:31Z","lastTransitionTime":"2026-01-28T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.417585 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.417631 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.417641 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.417655 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.417665 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:31Z","lastTransitionTime":"2026-01-28T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.520524 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.520565 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.520574 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.520587 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.520595 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:31Z","lastTransitionTime":"2026-01-28T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.567236 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 23:04:39.789907987 +0000 UTC Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.593680 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:31 crc kubenswrapper[5001]: E0128 17:16:31.593842 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.594119 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:31 crc kubenswrapper[5001]: E0128 17:16:31.594298 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.594424 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:31 crc kubenswrapper[5001]: E0128 17:16:31.594526 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.623075 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.623110 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.623120 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.623137 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.623147 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:31Z","lastTransitionTime":"2026-01-28T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.726124 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.726170 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.726186 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.726206 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.726220 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:31Z","lastTransitionTime":"2026-01-28T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.828837 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.828893 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.828904 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.828921 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.828933 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:31Z","lastTransitionTime":"2026-01-28T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.846859 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovnkube-controller/0.log" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.849813 5001 generic.go:334] "Generic (PLEG): container finished" podID="324b03b5-a748-440b-b1ad-15022599b855" containerID="16d31cde8bcdb630620e678496f008f3a75548400650719deaa8cef5586f898c" exitCode=1 Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.849853 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerDied","Data":"16d31cde8bcdb630620e678496f008f3a75548400650719deaa8cef5586f898c"} Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.850609 5001 scope.go:117] "RemoveContainer" containerID="16d31cde8bcdb630620e678496f008f3a75548400650719deaa8cef5586f898c" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.866331 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:31Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.879413 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:31Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.892953 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:31Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.906790 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:31Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.922047 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:31Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.931459 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.931498 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.931533 5001 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.931555 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.931570 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:31Z","lastTransitionTime":"2026-01-28T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.939526 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:31Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.950840 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:31Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.970550 5001 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d31cde8bcdb630620e678496f008f3a75548400650719deaa8cef5586f898c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d31cde8bcdb630620e678496f008f3a75548400650719deaa8cef5586f898c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:31Z\\\",\\\"message\\\":\\\"ector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 17:16:31.407069 6336 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 17:16:31.407127 6336 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 17:16:31.407574 6336 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 17:16:31.407619 6336 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 17:16:31.407633 6336 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 17:16:31.407650 6336 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 17:16:31.407672 6336 factory.go:656] Stopping watch factory\\\\nI0128 17:16:31.407703 6336 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 17:16:31.407705 6336 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 17:16:31.407714 6336 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 17:16:31.407734 6336 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:31Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.985650 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:31Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:31 crc kubenswrapper[5001]: I0128 17:16:31.995799 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:31Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.007376 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:32Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.021883 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T17:16:32Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.032414 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:32Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.034260 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.034282 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.034293 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.034309 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 
17:16:32.034319 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:32Z","lastTransitionTime":"2026-01-28T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.048964 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:32Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.137046 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.137097 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.137112 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.137127 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.137138 5001 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:32Z","lastTransitionTime":"2026-01-28T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.239308 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.239344 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.239356 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.239371 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.239383 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:32Z","lastTransitionTime":"2026-01-28T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.341691 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.341729 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.341738 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.341749 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.341758 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:32Z","lastTransitionTime":"2026-01-28T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.444727 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.444779 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.444796 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.444813 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.445109 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:32Z","lastTransitionTime":"2026-01-28T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.547044 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.547083 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.547097 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.547112 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.547122 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:32Z","lastTransitionTime":"2026-01-28T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.568333 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 01:56:07.064717563 +0000 UTC Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.649745 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.649812 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.649821 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.649837 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.649846 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:32Z","lastTransitionTime":"2026-01-28T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.752783 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.752826 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.752835 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.752850 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.752861 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:32Z","lastTransitionTime":"2026-01-28T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.789289 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd"] Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.790030 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.791961 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.792134 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.806079 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:32Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.818386 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da70482-1d5a-4149-95f7-0863485f6c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q7lxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:32Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.820574 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8da70482-1d5a-4149-95f7-0863485f6c06-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q7lxd\" (UID: \"8da70482-1d5a-4149-95f7-0863485f6c06\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.820642 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcspr\" (UniqueName: \"kubernetes.io/projected/8da70482-1d5a-4149-95f7-0863485f6c06-kube-api-access-bcspr\") pod \"ovnkube-control-plane-749d76644c-q7lxd\" (UID: \"8da70482-1d5a-4149-95f7-0863485f6c06\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.820773 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8da70482-1d5a-4149-95f7-0863485f6c06-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q7lxd\" (UID: 
\"8da70482-1d5a-4149-95f7-0863485f6c06\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.820851 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8da70482-1d5a-4149-95f7-0863485f6c06-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q7lxd\" (UID: \"8da70482-1d5a-4149-95f7-0863485f6c06\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.830195 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:32Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.844597 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:32Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.854575 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.854823 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.854942 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.855078 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.855200 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:32Z","lastTransitionTime":"2026-01-28T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.854915 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovnkube-controller/1.log" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.856249 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovnkube-controller/0.log" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.856863 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:32Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.858859 5001 generic.go:334] "Generic (PLEG): container finished" podID="324b03b5-a748-440b-b1ad-15022599b855" containerID="58f21169336cfc418a345c24ca3079e02ec0b84f832c2b4caf07286642377f04" exitCode=1 Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.858898 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerDied","Data":"58f21169336cfc418a345c24ca3079e02ec0b84f832c2b4caf07286642377f04"} Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.858933 5001 scope.go:117] "RemoveContainer" containerID="16d31cde8bcdb630620e678496f008f3a75548400650719deaa8cef5586f898c" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.859572 5001 scope.go:117] "RemoveContainer" containerID="58f21169336cfc418a345c24ca3079e02ec0b84f832c2b4caf07286642377f04" Jan 28 17:16:32 crc kubenswrapper[5001]: E0128 17:16:32.859812 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" podUID="324b03b5-a748-440b-b1ad-15022599b855" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.872633 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:32Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.884183 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:32Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.901775 5001 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d31cde8bcdb630620e678496f008f3a75548400650719deaa8cef5586f898c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d31cde8bcdb630620e678496f008f3a75548400650719deaa8cef5586f898c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:31Z\\\",\\\"message\\\":\\\"ector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 17:16:31.407069 6336 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 17:16:31.407127 6336 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 17:16:31.407574 6336 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 17:16:31.407619 6336 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 17:16:31.407633 6336 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 17:16:31.407650 6336 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 17:16:31.407672 6336 factory.go:656] Stopping watch factory\\\\nI0128 17:16:31.407703 6336 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 17:16:31.407705 6336 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 17:16:31.407714 6336 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 17:16:31.407734 6336 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:32Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.913002 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:32Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.922294 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8da70482-1d5a-4149-95f7-0863485f6c06-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q7lxd\" (UID: \"8da70482-1d5a-4149-95f7-0863485f6c06\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.922550 5001 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8da70482-1d5a-4149-95f7-0863485f6c06-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q7lxd\" (UID: \"8da70482-1d5a-4149-95f7-0863485f6c06\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.922763 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8da70482-1d5a-4149-95f7-0863485f6c06-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q7lxd\" (UID: \"8da70482-1d5a-4149-95f7-0863485f6c06\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.923273 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcspr\" (UniqueName: \"kubernetes.io/projected/8da70482-1d5a-4149-95f7-0863485f6c06-kube-api-access-bcspr\") pod \"ovnkube-control-plane-749d76644c-q7lxd\" (UID: \"8da70482-1d5a-4149-95f7-0863485f6c06\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.923397 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8da70482-1d5a-4149-95f7-0863485f6c06-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q7lxd\" (UID: \"8da70482-1d5a-4149-95f7-0863485f6c06\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.923413 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8da70482-1d5a-4149-95f7-0863485f6c06-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q7lxd\" (UID: \"8da70482-1d5a-4149-95f7-0863485f6c06\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.925320 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:32Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.927576 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8da70482-1d5a-4149-95f7-0863485f6c06-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q7lxd\" (UID: \"8da70482-1d5a-4149-95f7-0863485f6c06\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.939490 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcspr\" (UniqueName: \"kubernetes.io/projected/8da70482-1d5a-4149-95f7-0863485f6c06-kube-api-access-bcspr\") pod \"ovnkube-control-plane-749d76644c-q7lxd\" (UID: \"8da70482-1d5a-4149-95f7-0863485f6c06\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.941606 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:32Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.951646 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:32Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.958662 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.958697 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.958708 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.958726 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.958739 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:32Z","lastTransitionTime":"2026-01-28T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.964887 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:32Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.975716 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:32Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:32 crc kubenswrapper[5001]: I0128 17:16:32.991418 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:32Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.002281 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.012751 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.029852 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f21169336cfc418a345c24ca3079e02ec0b84f
832c2b4caf07286642377f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d31cde8bcdb630620e678496f008f3a75548400650719deaa8cef5586f898c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:31Z\\\",\\\"message\\\":\\\"ector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 17:16:31.407069 6336 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 17:16:31.407127 6336 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 17:16:31.407574 6336 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 17:16:31.407619 6336 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 17:16:31.407633 6336 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 17:16:31.407650 6336 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 17:16:31.407672 6336 factory.go:656] Stopping watch factory\\\\nI0128 17:16:31.407703 6336 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 17:16:31.407705 6336 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 17:16:31.407714 6336 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 17:16:31.407734 6336 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f21169336cfc418a345c24ca3079e02ec0b84f832c2b4caf07286642377f04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"message\\\":\\\"-multus/multus-additional-cni-plugins-dhcr2\\\\nI0128 17:16:32.785224 6497 services_controller.go:360] Finished syncing service apiserver on namespace openshift-kube-apiserver for network=default : 3.799625ms\\\\nI0128 17:16:32.785067 6497 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 17:16:32.785247 6497 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/package-server-manager-metrics for network=default\\\\nI0128 17:16:32.785251 6497 obj_retry.go:420] Function 
iterateRetryResources for *v1.Pod ended (in 990.987µs)\\\\nF0128 17:16:32.785309 6497 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.049244 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.061368 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.061410 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.061422 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.061440 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.061458 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:33Z","lastTransitionTime":"2026-01-28T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.062783 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.076220 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.089873 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.101548 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.104207 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.120759 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.133116 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.146421 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.162046 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.163247 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.163277 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.163287 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.163311 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.163323 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:33Z","lastTransitionTime":"2026-01-28T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.174583 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.186177 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.197936 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da70482-1d5a-4149-95f7-0863485f6c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q7lxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.265772 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.265818 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.265833 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.265853 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.265868 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:33Z","lastTransitionTime":"2026-01-28T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.368410 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.368444 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.368455 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.368472 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.368484 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:33Z","lastTransitionTime":"2026-01-28T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.470366 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.470408 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.470418 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.470433 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.470443 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:33Z","lastTransitionTime":"2026-01-28T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.568826 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:30:01.396800037 +0000 UTC Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.572624 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.572655 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.572666 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.572682 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.572695 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:33Z","lastTransitionTime":"2026-01-28T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.593142 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:33 crc kubenswrapper[5001]: E0128 17:16:33.593262 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.593569 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:33 crc kubenswrapper[5001]: E0128 17:16:33.593709 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.593574 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:33 crc kubenswrapper[5001]: E0128 17:16:33.593839 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.675234 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.675269 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.675279 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.675295 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.675307 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:33Z","lastTransitionTime":"2026-01-28T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.752515 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.752632 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.752652 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.752894 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.752907 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:33Z","lastTransitionTime":"2026-01-28T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:33 crc kubenswrapper[5001]: E0128 17:16:33.765824 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.769896 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.769927 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.769936 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.769951 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.769961 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:33Z","lastTransitionTime":"2026-01-28T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:33 crc kubenswrapper[5001]: E0128 17:16:33.782470 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.785597 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.785627 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.785637 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.785652 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.785661 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:33Z","lastTransitionTime":"2026-01-28T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:33 crc kubenswrapper[5001]: E0128 17:16:33.800601 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.804510 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.804580 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.804592 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.804609 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.804621 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:33Z","lastTransitionTime":"2026-01-28T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:33 crc kubenswrapper[5001]: E0128 17:16:33.818524 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.822226 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.822273 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.822282 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.822296 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.822306 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:33Z","lastTransitionTime":"2026-01-28T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:33 crc kubenswrapper[5001]: E0128 17:16:33.837259 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: E0128 17:16:33.837392 5001 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.838903 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.838932 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.838940 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.838953 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.838961 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:33Z","lastTransitionTime":"2026-01-28T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.864758 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" event={"ID":"8da70482-1d5a-4149-95f7-0863485f6c06","Type":"ContainerStarted","Data":"00248ed3d4030b567d19fedab90823412db2393612e566785ee3532e8a6aee74"} Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.864803 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" event={"ID":"8da70482-1d5a-4149-95f7-0863485f6c06","Type":"ContainerStarted","Data":"1db1883c1bece7a8390fe115dc06b2497bb4dc91d36a057871df707126998b86"} Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.864813 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" event={"ID":"8da70482-1d5a-4149-95f7-0863485f6c06","Type":"ContainerStarted","Data":"79e4ced2d48fd058892873dba259f18348506a0c32066b9a63424e24195e8a29"} Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.866693 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovnkube-controller/1.log" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.869769 5001 scope.go:117] "RemoveContainer" containerID="58f21169336cfc418a345c24ca3079e02ec0b84f832c2b4caf07286642377f04" Jan 28 17:16:33 crc kubenswrapper[5001]: E0128 17:16:33.869904 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" podUID="324b03b5-a748-440b-b1ad-15022599b855" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.880182 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.892409 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.908991 5001 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f21169336cfc418a345c24ca3079e02ec0b84f832c2b4caf07286642377f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d31cde8bcdb630620e678496f008f3a75548400650719deaa8cef5586f898c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:31Z\\\",\\\"message\\\":\\\"ector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 17:16:31.407069 6336 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 17:16:31.407127 6336 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 17:16:31.407574 6336 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 17:16:31.407619 6336 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 17:16:31.407633 6336 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 17:16:31.407650 6336 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 17:16:31.407672 6336 factory.go:656] Stopping watch factory\\\\nI0128 17:16:31.407703 6336 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 17:16:31.407705 6336 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 17:16:31.407714 6336 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 17:16:31.407734 6336 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f21169336cfc418a345c24ca3079e02ec0b84f832c2b4caf07286642377f04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"message\\\":\\\"-multus/multus-additional-cni-plugins-dhcr2\\\\nI0128 17:16:32.785224 6497 
services_controller.go:360] Finished syncing service apiserver on namespace openshift-kube-apiserver for network=default : 3.799625ms\\\\nI0128 17:16:32.785067 6497 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 17:16:32.785247 6497 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/package-server-manager-metrics for network=default\\\\nI0128 17:16:32.785251 6497 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 990.987µs)\\\\nF0128 17:16:32.785309 6497 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"
image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.919869 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.935217 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.941611 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.941640 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.941649 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.941663 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.941673 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:33Z","lastTransitionTime":"2026-01-28T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.949065 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.959734 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.972727 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:33 crc kubenswrapper[5001]: I0128 17:16:33.984635 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.000250 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:33Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.015835 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.029858 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.042815 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da70482-1d5a-4149-95f7-0863485f6c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db1883c1bece7a8390fe115dc06b2497bb4dc91d36a057871df707126998b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00248ed3d4030b567d19fedab90823412db2393612e566785ee3532e8a6aee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q7lxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 
17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.044028 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.044055 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.044062 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.044075 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.044086 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:34Z","lastTransitionTime":"2026-01-28T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.056136 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.067331 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.081117 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.091198 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.104095 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.119321 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.129353 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.143629 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.146319 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.146375 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.146407 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.146429 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.146445 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:34Z","lastTransitionTime":"2026-01-28T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.158076 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da70482-1d5a-4149-95f7-0863485f6c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db1883c1bece7a8390fe115dc06b2497bb4dc91d36a057871df707126998b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00248ed3d4030b567d19fedab90823412db2393612e566785ee3532e8a6aee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q7lxd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.173444 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.185875 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.200904 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.213417 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.227369 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.240146 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.249614 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.249659 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.249668 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:34 crc 
kubenswrapper[5001]: I0128 17:16:34.249690 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.249703 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:34Z","lastTransitionTime":"2026-01-28T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.255460 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.279280 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f21169336cfc418a345c24ca3079e02ec0b84f
832c2b4caf07286642377f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f21169336cfc418a345c24ca3079e02ec0b84f832c2b4caf07286642377f04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"message\\\":\\\"-multus/multus-additional-cni-plugins-dhcr2\\\\nI0128 17:16:32.785224 6497 services_controller.go:360] Finished syncing service apiserver on namespace openshift-kube-apiserver for network=default : 3.799625ms\\\\nI0128 17:16:32.785067 6497 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 17:16:32.785247 6497 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/package-server-manager-metrics for network=default\\\\nI0128 17:16:32.785251 6497 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 990.987µs)\\\\nF0128 17:16:32.785309 6497 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.352691 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.352744 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.352756 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.352780 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.352795 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:34Z","lastTransitionTime":"2026-01-28T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.455613 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.455669 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.455681 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.455704 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.455719 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:34Z","lastTransitionTime":"2026-01-28T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.558685 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.558735 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.558744 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.558765 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.558777 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:34Z","lastTransitionTime":"2026-01-28T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.569943 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:48:28.142832101 +0000 UTC Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.592284 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-rnn76"] Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.592756 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:34 crc kubenswrapper[5001]: E0128 17:16:34.592822 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.614632 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.631507 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\
\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.638946 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2sds\" (UniqueName: \"kubernetes.io/projected/2b5caa8d-b144-45a6-b334-e9e77c13064d-kube-api-access-d2sds\") pod \"network-metrics-daemon-rnn76\" (UID: \"2b5caa8d-b144-45a6-b334-e9e77c13064d\") " pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.639088 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs\") pod \"network-metrics-daemon-rnn76\" (UID: \"2b5caa8d-b144-45a6-b334-e9e77c13064d\") " pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.644049 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-0
1-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.657278 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.661294 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.661354 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.661373 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.661398 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.661416 5001 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:34Z","lastTransitionTime":"2026-01-28T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.670279 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.684780 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rnn76" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b5caa8d-b144-45a6-b334-e9e77c13064d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rnn76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.699369 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.710765 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da70482-1d5a-4149-95f7-0863485f6c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db1883c1bece7a8390fe115dc06b2497bb4dc91d36a057871df707126998b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00248ed3d4030b567d19fedab90823412db2393612e566785ee3532e8a6aee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q7lxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 
17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.724585 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.737425 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.739861 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs\") pod \"network-metrics-daemon-rnn76\" (UID: \"2b5caa8d-b144-45a6-b334-e9e77c13064d\") " pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.739945 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2sds\" (UniqueName: \"kubernetes.io/projected/2b5caa8d-b144-45a6-b334-e9e77c13064d-kube-api-access-d2sds\") pod \"network-metrics-daemon-rnn76\" (UID: \"2b5caa8d-b144-45a6-b334-e9e77c13064d\") " pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:34 crc kubenswrapper[5001]: E0128 17:16:34.740024 5001 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 17:16:34 crc kubenswrapper[5001]: E0128 17:16:34.740088 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs podName:2b5caa8d-b144-45a6-b334-e9e77c13064d nodeName:}" failed. No retries permitted until 2026-01-28 17:16:35.240068836 +0000 UTC m=+41.407857066 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs") pod "network-metrics-daemon-rnn76" (UID: "2b5caa8d-b144-45a6-b334-e9e77c13064d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.751829 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.757651 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2sds\" (UniqueName: \"kubernetes.io/projected/2b5caa8d-b144-45a6-b334-e9e77c13064d-kube-api-access-d2sds\") pod \"network-metrics-daemon-rnn76\" (UID: \"2b5caa8d-b144-45a6-b334-e9e77c13064d\") " pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.764275 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.764310 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.764321 5001 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.764336 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.764348 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:34Z","lastTransitionTime":"2026-01-28T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.767477 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.781718 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.795233 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.813799 5001 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f21169336cfc418a345c24ca3079e02ec0b84f832c2b4caf07286642377f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f21169336cfc418a345c24ca3079e02ec0b84f832c2b4caf07286642377f04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"message\\\":\\\"-multus/multus-additional-cni-plugins-dhcr2\\\\nI0128 17:16:32.785224 6497 services_controller.go:360] Finished syncing service apiserver on namespace openshift-kube-apiserver for network=default : 3.799625ms\\\\nI0128 17:16:32.785067 6497 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 17:16:32.785247 6497 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/package-server-manager-metrics for network=default\\\\nI0128 17:16:32.785251 6497 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 990.987µs)\\\\nF0128 17:16:32.785309 6497 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.824472 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.836680 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.845884 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.859351 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.866284 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.866337 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.866354 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.866371 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.866381 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:34Z","lastTransitionTime":"2026-01-28T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.873603 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:
16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.882910 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.893293 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.905959 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da70482-1d5a-4149-95f7-0863485f6c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db1883c1bece7a8390fe115dc06b2497bb4dc91d36a057871df707126998b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00248ed3d4030b567d19fedab90823412db2393612e566785ee3532e8a6aee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q7lxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.925662 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rnn76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b5caa8d-b144-45a6-b334-e9e77c13064d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rnn76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.941528 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.954876 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.968273 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.969417 5001 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.969466 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.969477 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.969492 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.969501 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:34Z","lastTransitionTime":"2026-01-28T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:34 crc kubenswrapper[5001]: I0128 17:16:34.988755 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f21169336cfc418a345c24ca3079e02ec0b84f
832c2b4caf07286642377f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f21169336cfc418a345c24ca3079e02ec0b84f832c2b4caf07286642377f04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"message\\\":\\\"-multus/multus-additional-cni-plugins-dhcr2\\\\nI0128 17:16:32.785224 6497 services_controller.go:360] Finished syncing service apiserver on namespace openshift-kube-apiserver for network=default : 3.799625ms\\\\nI0128 17:16:32.785067 6497 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 17:16:32.785247 6497 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/package-server-manager-metrics for network=default\\\\nI0128 17:16:32.785251 6497 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 990.987µs)\\\\nF0128 17:16:32.785309 6497 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:34Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.039215 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:35Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.051222 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:35Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.061179 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:35Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.071203 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.071244 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.071257 5001 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.071273 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.071285 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:35Z","lastTransitionTime":"2026-01-28T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.082929 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:35Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.173668 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.173717 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.173729 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.173753 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.173767 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:35Z","lastTransitionTime":"2026-01-28T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.243721 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs\") pod \"network-metrics-daemon-rnn76\" (UID: \"2b5caa8d-b144-45a6-b334-e9e77c13064d\") " pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:35 crc kubenswrapper[5001]: E0128 17:16:35.243948 5001 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 17:16:35 crc kubenswrapper[5001]: E0128 17:16:35.244124 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs podName:2b5caa8d-b144-45a6-b334-e9e77c13064d nodeName:}" failed. No retries permitted until 2026-01-28 17:16:36.244086251 +0000 UTC m=+42.411874531 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs") pod "network-metrics-daemon-rnn76" (UID: "2b5caa8d-b144-45a6-b334-e9e77c13064d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.275820 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.275875 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.275890 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.275933 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.275951 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:35Z","lastTransitionTime":"2026-01-28T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.378630 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.378676 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.378687 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.378734 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.378754 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:35Z","lastTransitionTime":"2026-01-28T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.481715 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.481754 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.481766 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.481781 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.481790 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:35Z","lastTransitionTime":"2026-01-28T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.570469 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 05:21:42.863501684 +0000 UTC Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.585096 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.585136 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.585146 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.585161 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.585173 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:35Z","lastTransitionTime":"2026-01-28T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.593404 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.593513 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:35 crc kubenswrapper[5001]: E0128 17:16:35.593711 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.593767 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:35 crc kubenswrapper[5001]: E0128 17:16:35.593844 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:35 crc kubenswrapper[5001]: E0128 17:16:35.593928 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.688502 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.688555 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.688572 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.688834 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.688854 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:35Z","lastTransitionTime":"2026-01-28T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.792130 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.792476 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.792491 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.792510 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.792524 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:35Z","lastTransitionTime":"2026-01-28T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.895507 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.895556 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.895566 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.895585 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.895596 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:35Z","lastTransitionTime":"2026-01-28T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.998356 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.998401 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.998417 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.998437 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:35 crc kubenswrapper[5001]: I0128 17:16:35.998452 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:35Z","lastTransitionTime":"2026-01-28T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.100303 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.100364 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.100375 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.100392 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.100403 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:36Z","lastTransitionTime":"2026-01-28T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.202289 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.202328 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.202339 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.202355 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.202367 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:36Z","lastTransitionTime":"2026-01-28T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.254881 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs\") pod \"network-metrics-daemon-rnn76\" (UID: \"2b5caa8d-b144-45a6-b334-e9e77c13064d\") " pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:36 crc kubenswrapper[5001]: E0128 17:16:36.255023 5001 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 17:16:36 crc kubenswrapper[5001]: E0128 17:16:36.255084 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs podName:2b5caa8d-b144-45a6-b334-e9e77c13064d nodeName:}" failed. No retries permitted until 2026-01-28 17:16:38.255066692 +0000 UTC m=+44.422854932 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs") pod "network-metrics-daemon-rnn76" (UID: "2b5caa8d-b144-45a6-b334-e9e77c13064d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.304827 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.304887 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.304903 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.304949 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.304964 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:36Z","lastTransitionTime":"2026-01-28T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.407206 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.407235 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.407246 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.407261 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.407271 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:36Z","lastTransitionTime":"2026-01-28T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.509726 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.509945 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.510069 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.510154 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.510216 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:36Z","lastTransitionTime":"2026-01-28T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.570928 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 10:27:07.031914877 +0000 UTC Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.593355 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:36 crc kubenswrapper[5001]: E0128 17:16:36.593504 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.612659 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.612920 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.613061 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.613157 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.613245 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:36Z","lastTransitionTime":"2026-01-28T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.716012 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.716274 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.716360 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.716451 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.716547 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:36Z","lastTransitionTime":"2026-01-28T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.819523 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.819802 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.819893 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.819960 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.820047 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:36Z","lastTransitionTime":"2026-01-28T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.922756 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.922811 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.922820 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.922835 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:36 crc kubenswrapper[5001]: I0128 17:16:36.922846 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:36Z","lastTransitionTime":"2026-01-28T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.025618 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.025662 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.025676 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.025694 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.025708 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:37Z","lastTransitionTime":"2026-01-28T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.127730 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.127791 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.127808 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.127830 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.127848 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:37Z","lastTransitionTime":"2026-01-28T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.229806 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.229869 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.229883 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.229921 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.229933 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:37Z","lastTransitionTime":"2026-01-28T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.332289 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.332367 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.332384 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.332407 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.332422 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:37Z","lastTransitionTime":"2026-01-28T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.435143 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.435173 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.435181 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.435194 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.435203 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:37Z","lastTransitionTime":"2026-01-28T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.537549 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.537585 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.537594 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.537608 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.537619 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:37Z","lastTransitionTime":"2026-01-28T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.571546 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:29:58.784715219 +0000 UTC Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.593033 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.593060 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.593111 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:37 crc kubenswrapper[5001]: E0128 17:16:37.593147 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:37 crc kubenswrapper[5001]: E0128 17:16:37.593232 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:37 crc kubenswrapper[5001]: E0128 17:16:37.593329 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.640119 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.640416 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.640494 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.640608 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.640678 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:37Z","lastTransitionTime":"2026-01-28T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.742686 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.743165 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.743245 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.743317 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.743380 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:37Z","lastTransitionTime":"2026-01-28T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.845805 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.845841 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.845853 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.845870 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.845881 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:37Z","lastTransitionTime":"2026-01-28T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.948003 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.948034 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.948042 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.948056 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:37 crc kubenswrapper[5001]: I0128 17:16:37.948064 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:37Z","lastTransitionTime":"2026-01-28T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.050571 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.050607 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.050617 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.050629 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.050638 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:38Z","lastTransitionTime":"2026-01-28T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.153867 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.154155 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.154191 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.154209 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.154219 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:38Z","lastTransitionTime":"2026-01-28T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.256842 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.256883 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.256892 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.256908 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.256918 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:38Z","lastTransitionTime":"2026-01-28T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.276810 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs\") pod \"network-metrics-daemon-rnn76\" (UID: \"2b5caa8d-b144-45a6-b334-e9e77c13064d\") " pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:38 crc kubenswrapper[5001]: E0128 17:16:38.277012 5001 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 17:16:38 crc kubenswrapper[5001]: E0128 17:16:38.277071 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs podName:2b5caa8d-b144-45a6-b334-e9e77c13064d nodeName:}" failed. No retries permitted until 2026-01-28 17:16:42.277055268 +0000 UTC m=+48.444843498 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs") pod "network-metrics-daemon-rnn76" (UID: "2b5caa8d-b144-45a6-b334-e9e77c13064d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.359196 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.359233 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.359245 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.359261 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.359272 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:38Z","lastTransitionTime":"2026-01-28T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.482311 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.482562 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.482642 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.482709 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.482768 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:38Z","lastTransitionTime":"2026-01-28T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.572651 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 18:14:35.070254159 +0000 UTC Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.584407 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.584627 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.584713 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.584820 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.584905 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:38Z","lastTransitionTime":"2026-01-28T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.593769 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:38 crc kubenswrapper[5001]: E0128 17:16:38.593945 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.686825 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.687084 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.687110 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.687124 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.687134 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:38Z","lastTransitionTime":"2026-01-28T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.789527 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.789604 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.789622 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.789645 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.789662 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:38Z","lastTransitionTime":"2026-01-28T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.892144 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.892188 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.892198 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.892214 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.892225 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:38Z","lastTransitionTime":"2026-01-28T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.994307 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.994345 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.994354 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.994369 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:38 crc kubenswrapper[5001]: I0128 17:16:38.994379 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:38Z","lastTransitionTime":"2026-01-28T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.097084 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.097121 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.097129 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.097145 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.097153 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:39Z","lastTransitionTime":"2026-01-28T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.200363 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.200475 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.200493 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.200511 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.200523 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:39Z","lastTransitionTime":"2026-01-28T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.303107 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.303141 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.303149 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.303164 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.303174 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:39Z","lastTransitionTime":"2026-01-28T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.406393 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.406453 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.406468 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.406487 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.406502 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:39Z","lastTransitionTime":"2026-01-28T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.509205 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.509241 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.509249 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.509263 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.509272 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:39Z","lastTransitionTime":"2026-01-28T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.573215 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:49:10.059516034 +0000 UTC Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.593863 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.593927 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.593903 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:39 crc kubenswrapper[5001]: E0128 17:16:39.594050 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:39 crc kubenswrapper[5001]: E0128 17:16:39.594167 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:39 crc kubenswrapper[5001]: E0128 17:16:39.594212 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.611218 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.611263 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.611276 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.611291 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.611302 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:39Z","lastTransitionTime":"2026-01-28T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.713746 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.713790 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.713801 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.713818 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.713829 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:39Z","lastTransitionTime":"2026-01-28T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.816095 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.816140 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.816153 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.816171 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.816183 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:39Z","lastTransitionTime":"2026-01-28T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.918542 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.918581 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.918592 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.918608 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:39 crc kubenswrapper[5001]: I0128 17:16:39.918620 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:39Z","lastTransitionTime":"2026-01-28T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.021087 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.021138 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.021154 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.021172 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.021184 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:40Z","lastTransitionTime":"2026-01-28T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.123675 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.123708 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.123717 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.123730 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.123741 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:40Z","lastTransitionTime":"2026-01-28T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.226197 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.226245 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.226255 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.226276 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.226286 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:40Z","lastTransitionTime":"2026-01-28T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.328318 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.328355 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.328371 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.328395 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.328407 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:40Z","lastTransitionTime":"2026-01-28T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.431539 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.431672 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.431692 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.431711 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.431724 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:40Z","lastTransitionTime":"2026-01-28T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.534638 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.534676 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.534686 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.534703 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.534712 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:40Z","lastTransitionTime":"2026-01-28T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.574361 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:19:02.495202731 +0000 UTC Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.593895 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:40 crc kubenswrapper[5001]: E0128 17:16:40.594081 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.637041 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.637277 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.637288 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.637301 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.637309 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:40Z","lastTransitionTime":"2026-01-28T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.739350 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.739393 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.739406 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.739424 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.739436 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:40Z","lastTransitionTime":"2026-01-28T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.842391 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.842439 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.842478 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.842500 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.842519 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:40Z","lastTransitionTime":"2026-01-28T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.944620 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.944670 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.944689 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.944710 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:40 crc kubenswrapper[5001]: I0128 17:16:40.944725 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:40Z","lastTransitionTime":"2026-01-28T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.047472 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.047539 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.047567 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.047598 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.047620 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:41Z","lastTransitionTime":"2026-01-28T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.151477 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.151539 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.151562 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.151590 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.151614 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:41Z","lastTransitionTime":"2026-01-28T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.254355 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.254393 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.254402 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.254415 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.254424 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:41Z","lastTransitionTime":"2026-01-28T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.356320 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.356405 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.356431 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.356445 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.356455 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:41Z","lastTransitionTime":"2026-01-28T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.458613 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.458654 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.458664 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.458681 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.458692 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:41Z","lastTransitionTime":"2026-01-28T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.562246 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.562284 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.562292 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.562304 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.562312 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:41Z","lastTransitionTime":"2026-01-28T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.575526 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 18:09:14.639397533 +0000 UTC Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.593144 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.593284 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:41 crc kubenswrapper[5001]: E0128 17:16:41.593284 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.593345 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:41 crc kubenswrapper[5001]: E0128 17:16:41.593355 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:41 crc kubenswrapper[5001]: E0128 17:16:41.593437 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.664917 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.664958 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.664976 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.665014 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.665024 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:41Z","lastTransitionTime":"2026-01-28T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.783924 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.783976 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.783988 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.784021 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.784033 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:41Z","lastTransitionTime":"2026-01-28T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.886461 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.886517 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.886530 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.886548 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.886560 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:41Z","lastTransitionTime":"2026-01-28T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.989315 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.989365 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.989374 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.989388 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:41 crc kubenswrapper[5001]: I0128 17:16:41.989396 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:41Z","lastTransitionTime":"2026-01-28T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.092512 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.092587 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.092604 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.092634 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.092655 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:42Z","lastTransitionTime":"2026-01-28T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.195466 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.195508 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.195521 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.195536 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.195547 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:42Z","lastTransitionTime":"2026-01-28T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.297496 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.297534 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.297548 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.297565 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.297578 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:42Z","lastTransitionTime":"2026-01-28T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.317655 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs\") pod \"network-metrics-daemon-rnn76\" (UID: \"2b5caa8d-b144-45a6-b334-e9e77c13064d\") " pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:42 crc kubenswrapper[5001]: E0128 17:16:42.317810 5001 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 17:16:42 crc kubenswrapper[5001]: E0128 17:16:42.317902 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs podName:2b5caa8d-b144-45a6-b334-e9e77c13064d nodeName:}" failed. No retries permitted until 2026-01-28 17:16:50.317879452 +0000 UTC m=+56.485667742 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs") pod "network-metrics-daemon-rnn76" (UID: "2b5caa8d-b144-45a6-b334-e9e77c13064d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.400420 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.400460 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.400469 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.400486 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.400495 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:42Z","lastTransitionTime":"2026-01-28T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.502466 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.502516 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.502527 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.502543 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.502555 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:42Z","lastTransitionTime":"2026-01-28T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.576503 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 02:24:13.273442489 +0000 UTC Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.593133 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:42 crc kubenswrapper[5001]: E0128 17:16:42.593304 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.604582 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.604630 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.604641 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.604659 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.604670 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:42Z","lastTransitionTime":"2026-01-28T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.706755 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.706806 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.706817 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.706836 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.706848 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:42Z","lastTransitionTime":"2026-01-28T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.808968 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.809046 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.809058 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.809074 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.809086 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:42Z","lastTransitionTime":"2026-01-28T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.911493 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.911525 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.911534 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.911548 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:42 crc kubenswrapper[5001]: I0128 17:16:42.911556 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:42Z","lastTransitionTime":"2026-01-28T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.013905 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.013941 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.013951 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.013966 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.013976 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:43Z","lastTransitionTime":"2026-01-28T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.116962 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.117015 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.117024 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.117037 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.117046 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:43Z","lastTransitionTime":"2026-01-28T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.219542 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.220255 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.220294 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.220317 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.220337 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:43Z","lastTransitionTime":"2026-01-28T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.322711 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.322798 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.322813 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.322829 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.322839 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:43Z","lastTransitionTime":"2026-01-28T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.425122 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.425191 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.425204 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.425220 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.425231 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:43Z","lastTransitionTime":"2026-01-28T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.527563 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.527605 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.527615 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.527631 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.527642 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:43Z","lastTransitionTime":"2026-01-28T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.577541 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:30:04.358005581 +0000 UTC Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.593942 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.593966 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:43 crc kubenswrapper[5001]: E0128 17:16:43.594138 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:43 crc kubenswrapper[5001]: E0128 17:16:43.594226 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.594282 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:43 crc kubenswrapper[5001]: E0128 17:16:43.594356 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.629909 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.629953 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.629964 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.629997 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.630008 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:43Z","lastTransitionTime":"2026-01-28T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.733304 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.733368 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.733381 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.733404 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.733419 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:43Z","lastTransitionTime":"2026-01-28T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.835568 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.835634 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.835653 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.835679 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.835700 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:43Z","lastTransitionTime":"2026-01-28T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.939369 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.939416 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.939429 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.939449 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:43 crc kubenswrapper[5001]: I0128 17:16:43.939462 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:43Z","lastTransitionTime":"2026-01-28T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.042977 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.043032 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.043041 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.043055 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.043257 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:44Z","lastTransitionTime":"2026-01-28T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.146592 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.146655 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.146670 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.146703 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.146721 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:44Z","lastTransitionTime":"2026-01-28T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.156040 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.156273 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.156379 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.156455 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.156518 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:44Z","lastTransitionTime":"2026-01-28T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:44 crc kubenswrapper[5001]: E0128 17:16:44.168781 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.174418 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.174454 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.174464 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.174482 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.174500 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:44Z","lastTransitionTime":"2026-01-28T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:44 crc kubenswrapper[5001]: E0128 17:16:44.186411 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.190639 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.190672 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.190683 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.190701 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.190715 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:44Z","lastTransitionTime":"2026-01-28T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:44 crc kubenswrapper[5001]: E0128 17:16:44.204539 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.208320 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.208373 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.208391 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.208422 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.208442 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:44Z","lastTransitionTime":"2026-01-28T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:44 crc kubenswrapper[5001]: E0128 17:16:44.222108 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.225327 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.225363 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.225374 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.225392 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.225405 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:44Z","lastTransitionTime":"2026-01-28T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:44 crc kubenswrapper[5001]: E0128 17:16:44.238139 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: E0128 17:16:44.238267 5001 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.249108 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.249350 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.249367 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.249392 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.249409 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:44Z","lastTransitionTime":"2026-01-28T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.352198 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.352246 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.352256 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.352271 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.352282 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:44Z","lastTransitionTime":"2026-01-28T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.454978 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.455039 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.455050 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.455066 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.455079 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:44Z","lastTransitionTime":"2026-01-28T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.557135 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.557189 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.557208 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.557229 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.557244 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:44Z","lastTransitionTime":"2026-01-28T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.577806 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:20:27.235180841 +0000 UTC Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.593480 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:44 crc kubenswrapper[5001]: E0128 17:16:44.593976 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.594180 5001 scope.go:117] "RemoveContainer" containerID="58f21169336cfc418a345c24ca3079e02ec0b84f832c2b4caf07286642377f04" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.606611 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.620108 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eead
e0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\
\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.631488 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.649917 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.659610 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.660217 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.660259 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.660271 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.660290 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.660301 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:44Z","lastTransitionTime":"2026-01-28T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.669672 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rnn76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b5caa8d-b144-45a6-b334-e9e77c13064d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rnn76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.681801 5001 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.695085 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da70482-1d5a-4149-95f7-0863485f6c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db1883c1bece7a8390fe115dc06b2497bb4dc91d36a057871df707126998b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00248ed3d4030b567d19fedab90823412db2393612e566785ee3532e8a6aee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q7lxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 
17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.708178 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.722337 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.733803 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.751122 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.763271 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.763302 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.763312 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.763327 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.763361 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:44Z","lastTransitionTime":"2026-01-28T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.763845 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.774669 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.795055 5001 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f21169336cfc418a345c24ca3079e02ec0b84f832c2b4caf07286642377f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f21169336cfc418a345c24ca3079e02ec0b84f832c2b4caf07286642377f04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"message\\\":\\\"-multus/multus-additional-cni-plugins-dhcr2\\\\nI0128 17:16:32.785224 6497 services_controller.go:360] Finished syncing service apiserver on namespace openshift-kube-apiserver for network=default : 3.799625ms\\\\nI0128 17:16:32.785067 6497 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 17:16:32.785247 6497 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/package-server-manager-metrics for network=default\\\\nI0128 17:16:32.785251 6497 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 990.987µs)\\\\nF0128 17:16:32.785309 6497 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.809567 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.865099 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.865141 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.865152 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.865170 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.865198 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:44Z","lastTransitionTime":"2026-01-28T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.901458 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovnkube-controller/1.log" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.904564 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerStarted","Data":"2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974"} Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.904923 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.916163 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.929147 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.944925 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.956250 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.967438 5001 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.967612 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.967688 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.967775 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.967847 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:44Z","lastTransitionTime":"2026-01-28T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.974917 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2628351e2920bafb2fd864d9d8b789c2fdc5156b
34f0e82d3d54813a0c866974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f21169336cfc418a345c24ca3079e02ec0b84f832c2b4caf07286642377f04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"message\\\":\\\"-multus/multus-additional-cni-plugins-dhcr2\\\\nI0128 17:16:32.785224 6497 services_controller.go:360] Finished syncing service apiserver on namespace openshift-kube-apiserver for network=default : 3.799625ms\\\\nI0128 17:16:32.785067 6497 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 17:16:32.785247 6497 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/package-server-manager-metrics for network=default\\\\nI0128 17:16:32.785251 6497 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 990.987µs)\\\\nF0128 17:16:32.785309 6497 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:44 crc kubenswrapper[5001]: I0128 17:16:44.996365 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:44Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.006877 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:45Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.019215 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:45Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.035889 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeac
db2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:45Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.051957 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:45Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.069662 5001 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d487
0fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:45Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.071404 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.071433 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.071443 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.071462 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.071472 5001 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:45Z","lastTransitionTime":"2026-01-28T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.086712 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da70482-1d5a-4149-95f7-0863485f6c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db1883c1bece7a8390fe115dc06b2497bb4dc91d36a057871df707126998b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00248ed3d4030b567d19fedab90823412db2393612e566785ee3532e8a6aee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q7lxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:45Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.100585 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rnn76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b5caa8d-b144-45a6-b334-e9e77c13064d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rnn76\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:45Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.120095 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:45Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.136165 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:45Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.146128 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:45Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.174712 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.175026 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.175130 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.175206 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.175271 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:45Z","lastTransitionTime":"2026-01-28T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.277994 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.278032 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.278042 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.278055 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.278064 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:45Z","lastTransitionTime":"2026-01-28T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.350592 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:16:45 crc kubenswrapper[5001]: E0128 17:16:45.350823 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:17:17.350802949 +0000 UTC m=+83.518591179 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.381274 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.381585 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.381674 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.381757 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.381833 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:45Z","lastTransitionTime":"2026-01-28T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.451470 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.451725 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:45 crc kubenswrapper[5001]: E0128 17:16:45.451636 5001 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 17:16:45 crc kubenswrapper[5001]: E0128 17:16:45.451809 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 17:16:45 crc kubenswrapper[5001]: E0128 17:16:45.452018 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 17:16:45 crc kubenswrapper[5001]: E0128 17:16:45.452065 5001 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:45 crc kubenswrapper[5001]: E0128 17:16:45.451908 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 17:17:17.451885756 +0000 UTC m=+83.619674036 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 17:16:45 crc kubenswrapper[5001]: E0128 17:16:45.452251 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:17:17.452224335 +0000 UTC m=+83.620012565 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:45 crc kubenswrapper[5001]: E0128 17:16:45.452325 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 17:16:45 crc kubenswrapper[5001]: E0128 17:16:45.452395 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 17:16:45 crc kubenswrapper[5001]: E0128 17:16:45.452450 5001 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:45 crc kubenswrapper[5001]: E0128 17:16:45.452557 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 17:17:17.452527563 +0000 UTC m=+83.620315793 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.451818 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.452740 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:45 crc kubenswrapper[5001]: E0128 17:16:45.452865 5001 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 17:16:45 crc kubenswrapper[5001]: E0128 17:16:45.453008 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 17:17:17.452999576 +0000 UTC m=+83.620787806 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.484320 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.484369 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.484381 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.484396 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.484405 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:45Z","lastTransitionTime":"2026-01-28T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.578532 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 12:39:54.773554762 +0000 UTC Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.586798 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.586836 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.586847 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.586863 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.586876 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:45Z","lastTransitionTime":"2026-01-28T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.593625 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.593687 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.593847 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:45 crc kubenswrapper[5001]: E0128 17:16:45.594026 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:45 crc kubenswrapper[5001]: E0128 17:16:45.594129 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:45 crc kubenswrapper[5001]: E0128 17:16:45.594193 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.690733 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.690782 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.690795 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.690811 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.690823 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:45Z","lastTransitionTime":"2026-01-28T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.792930 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.792965 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.792980 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.792997 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.793028 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:45Z","lastTransitionTime":"2026-01-28T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.895676 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.895710 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.895719 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.895735 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.895746 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:45Z","lastTransitionTime":"2026-01-28T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.909620 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovnkube-controller/2.log" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.910361 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovnkube-controller/1.log" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.912937 5001 generic.go:334] "Generic (PLEG): container finished" podID="324b03b5-a748-440b-b1ad-15022599b855" containerID="2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974" exitCode=1 Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.913677 5001 scope.go:117] "RemoveContainer" containerID="2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974" Jan 28 17:16:45 crc kubenswrapper[5001]: E0128 17:16:45.913840 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" podUID="324b03b5-a748-440b-b1ad-15022599b855" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.914043 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerDied","Data":"2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974"} Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.914093 5001 scope.go:117] "RemoveContainer" containerID="58f21169336cfc418a345c24ca3079e02ec0b84f832c2b4caf07286642377f04" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.925751 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:45Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.935816 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:45Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.949113 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:45Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.959847 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:45Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.971356 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:45Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.982563 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:45Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.993261 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:45Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.997617 5001 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.997686 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.997700 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.997717 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:45 crc kubenswrapper[5001]: I0128 17:16:45.997728 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:45Z","lastTransitionTime":"2026-01-28T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.010682 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2628351e2920bafb2fd864d9d8b789c2fdc5156b
34f0e82d3d54813a0c866974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f21169336cfc418a345c24ca3079e02ec0b84f832c2b4caf07286642377f04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"message\\\":\\\"-multus/multus-additional-cni-plugins-dhcr2\\\\nI0128 17:16:32.785224 6497 services_controller.go:360] Finished syncing service apiserver on namespace openshift-kube-apiserver for network=default : 3.799625ms\\\\nI0128 17:16:32.785067 6497 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 17:16:32.785247 6497 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/package-server-manager-metrics for network=default\\\\nI0128 17:16:32.785251 6497 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 990.987µs)\\\\nF0128 17:16:32.785309 6497 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:45Z\\\",\\\"message\\\":\\\"roller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0128 17:16:45.490606 6705 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0128 17:16:45.490611 6705 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0128 17:16:45.490615 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nF0128 17:16:45.490618 6705 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:46Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.022737 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:46Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.033345 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:46Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.044336 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:46Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.058663 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T17:16:46Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.070074 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:46Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.080643 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:46Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.092242 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da70482-1d5a-4149-95f7-0863485f6c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db1883c1bece7a8390fe115dc06b2497bb4dc91d36a057871df707126998b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00248ed3d4030b567d19fedab90823412db2393612e566785ee3532e8a6aee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q7lxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:46Z is after 2025-08-24T17:21:41Z" Jan 28 
17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.099740 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.099782 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.099794 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.099810 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.099822 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:46Z","lastTransitionTime":"2026-01-28T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.107381 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rnn76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b5caa8d-b144-45a6-b334-e9e77c13064d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rnn76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:46Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.205701 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.205751 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.205761 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.205777 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.205788 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:46Z","lastTransitionTime":"2026-01-28T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.307917 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.307966 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.307979 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.308014 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.308027 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:46Z","lastTransitionTime":"2026-01-28T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.411126 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.411174 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.411199 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.411225 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.411242 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:46Z","lastTransitionTime":"2026-01-28T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.513952 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.514013 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.514026 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.514043 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.514053 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:46Z","lastTransitionTime":"2026-01-28T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.579496 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:24:03.090273744 +0000 UTC Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.594043 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:46 crc kubenswrapper[5001]: E0128 17:16:46.594206 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.615890 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.615923 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.615932 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.615946 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.615955 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:46Z","lastTransitionTime":"2026-01-28T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.718726 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.718766 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.718777 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.718793 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.718802 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:46Z","lastTransitionTime":"2026-01-28T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.820699 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.820761 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.820773 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.820791 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.820803 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:46Z","lastTransitionTime":"2026-01-28T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.918696 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovnkube-controller/2.log" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.921960 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.922012 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.922020 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.922032 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.922040 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:46Z","lastTransitionTime":"2026-01-28T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.923876 5001 scope.go:117] "RemoveContainer" containerID="2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974" Jan 28 17:16:46 crc kubenswrapper[5001]: E0128 17:16:46.924028 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" podUID="324b03b5-a748-440b-b1ad-15022599b855" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.935142 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da70482-1d5a-4149-95f7-0863485f6c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db1883c1bece7a8390fe115dc06b2497bb4dc91d36a057871df707126998b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00248ed3d4030b567d19fedab90823412db2393612e566785ee3532e8a6aee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q7lxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:46Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.945256 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rnn76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b5caa8d-b144-45a6-b334-e9e77c13064d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rnn76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:46Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.959564 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:46Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.971766 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:46Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.982639 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:46Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:46 crc kubenswrapper[5001]: I0128 17:16:46.994524 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:46Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.008303 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:47Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.019375 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:47Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.024324 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.024364 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.024374 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.024389 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.024400 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:47Z","lastTransitionTime":"2026-01-28T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.029597 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:47Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.044804 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:45Z\\\",\\\"message\\\":\\\"roller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0128 17:16:45.490606 6705 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0128 17:16:45.490611 6705 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0128 17:16:45.490615 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nF0128 17:16:45.490618 6705 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has 
expired\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:47Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.056157 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:47Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.065840 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:47Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.077515 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:47Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.091343 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeac
db2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:47Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.100525 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:47Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.111754 5001 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d487
0fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:47Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.126735 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.126772 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.126781 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.126795 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.126804 5001 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:47Z","lastTransitionTime":"2026-01-28T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.229415 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.229489 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.229513 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.229544 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.229570 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:47Z","lastTransitionTime":"2026-01-28T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.331893 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.331936 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.331945 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.331960 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.331970 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:47Z","lastTransitionTime":"2026-01-28T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.434371 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.434616 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.434683 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.434785 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.434855 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:47Z","lastTransitionTime":"2026-01-28T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.537573 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.537675 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.537690 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.537715 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.537734 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:47Z","lastTransitionTime":"2026-01-28T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.580041 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 09:21:07.140514345 +0000 UTC Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.593380 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.593451 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.593457 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:47 crc kubenswrapper[5001]: E0128 17:16:47.593523 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:47 crc kubenswrapper[5001]: E0128 17:16:47.593648 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:47 crc kubenswrapper[5001]: E0128 17:16:47.593728 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.640498 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.640577 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.640588 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.640611 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.640625 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:47Z","lastTransitionTime":"2026-01-28T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.743576 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.743629 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.743639 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.743655 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.743666 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:47Z","lastTransitionTime":"2026-01-28T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.846587 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.846643 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.846656 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.846674 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.846686 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:47Z","lastTransitionTime":"2026-01-28T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.948631 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.948671 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.948682 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.948697 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:47 crc kubenswrapper[5001]: I0128 17:16:47.948709 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:47Z","lastTransitionTime":"2026-01-28T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.050529 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.050564 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.050572 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.050585 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.050596 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:48Z","lastTransitionTime":"2026-01-28T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.153256 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.153314 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.153325 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.153338 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.153348 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:48Z","lastTransitionTime":"2026-01-28T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.255658 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.255710 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.255722 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.255742 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.255755 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:48Z","lastTransitionTime":"2026-01-28T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.357582 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.357612 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.357621 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.357634 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.357643 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:48Z","lastTransitionTime":"2026-01-28T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.460320 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.460379 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.460394 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.460416 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.460439 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:48Z","lastTransitionTime":"2026-01-28T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.562595 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.562637 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.562646 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.562660 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.562671 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:48Z","lastTransitionTime":"2026-01-28T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.580390 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:50:17.854805218 +0000 UTC Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.594020 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:48 crc kubenswrapper[5001]: E0128 17:16:48.594244 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.664706 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.664752 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.664764 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.664779 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.664790 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:48Z","lastTransitionTime":"2026-01-28T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.698698 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.708738 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.716502 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-l
ib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:45Z\\\",\\\"message\\\":\\\"roller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0128 17:16:45.490606 6705 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0128 17:16:45.490611 6705 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0128 17:16:45.490615 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nF0128 17:16:45.490618 6705 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify 
certificate: x509: certificate has expired\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:48Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.729246 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:48Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.741120 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:48Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.754453 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:48Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.767225 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.767262 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.767275 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.767297 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.767309 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:48Z","lastTransitionTime":"2026-01-28T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.767941 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:48Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.778202 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:48Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.790829 5001 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa
5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:48Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.802359 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:48Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.817784 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:48Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.834304 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T17:16:48Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.845254 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:48Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.855676 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:48Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.866184 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da70482-1d5a-4149-95f7-0863485f6c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db1883c1bece7a8390fe115dc06b2497bb4dc91d36a057871df707126998b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00248ed3d4030b567d19fedab90823412db2393612e566785ee3532e8a6aee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q7lxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:48Z is after 2025-08-24T17:21:41Z" Jan 28 
17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.871226 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.871265 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.871276 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.871292 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.871304 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:48Z","lastTransitionTime":"2026-01-28T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.875694 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rnn76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b5caa8d-b144-45a6-b334-e9e77c13064d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rnn76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:48Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.886899 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:48Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.897281 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:48Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.973176 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.973218 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.973229 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.973245 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:48 crc kubenswrapper[5001]: I0128 17:16:48.973255 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:48Z","lastTransitionTime":"2026-01-28T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.075496 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.075544 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.075556 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.075574 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.075586 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:49Z","lastTransitionTime":"2026-01-28T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.177417 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.177462 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.177473 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.177494 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.177506 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:49Z","lastTransitionTime":"2026-01-28T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.280295 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.280343 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.280351 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.280364 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.280374 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:49Z","lastTransitionTime":"2026-01-28T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.383056 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.383169 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.383186 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.383210 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.383224 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:49Z","lastTransitionTime":"2026-01-28T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.485630 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.485682 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.485699 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.485719 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.485731 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:49Z","lastTransitionTime":"2026-01-28T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.581280 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 02:09:02.379558642 +0000 UTC Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.588549 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.588629 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.588646 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.588668 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.588685 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:49Z","lastTransitionTime":"2026-01-28T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.593907 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.593933 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.593922 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:49 crc kubenswrapper[5001]: E0128 17:16:49.594045 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:49 crc kubenswrapper[5001]: E0128 17:16:49.594110 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:49 crc kubenswrapper[5001]: E0128 17:16:49.594168 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.690857 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.690909 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.690924 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.690945 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.690962 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:49Z","lastTransitionTime":"2026-01-28T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.793661 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.793717 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.793728 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.793744 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.793755 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:49Z","lastTransitionTime":"2026-01-28T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.896104 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.896143 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.896176 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.896192 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.896200 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:49Z","lastTransitionTime":"2026-01-28T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.998854 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.998917 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.998928 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.998947 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:49 crc kubenswrapper[5001]: I0128 17:16:49.998960 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:49Z","lastTransitionTime":"2026-01-28T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.100771 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.100801 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.100812 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.100827 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.100838 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:50Z","lastTransitionTime":"2026-01-28T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.203167 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.203224 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.203234 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.203249 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.203258 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:50Z","lastTransitionTime":"2026-01-28T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.305522 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.305565 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.305598 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.305616 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.305628 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:50Z","lastTransitionTime":"2026-01-28T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.401535 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs\") pod \"network-metrics-daemon-rnn76\" (UID: \"2b5caa8d-b144-45a6-b334-e9e77c13064d\") " pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:50 crc kubenswrapper[5001]: E0128 17:16:50.401715 5001 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 17:16:50 crc kubenswrapper[5001]: E0128 17:16:50.401782 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs podName:2b5caa8d-b144-45a6-b334-e9e77c13064d nodeName:}" failed. No retries permitted until 2026-01-28 17:17:06.401764571 +0000 UTC m=+72.569552801 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs") pod "network-metrics-daemon-rnn76" (UID: "2b5caa8d-b144-45a6-b334-e9e77c13064d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.408145 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.408186 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.408204 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.408221 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.408232 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:50Z","lastTransitionTime":"2026-01-28T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.510425 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.510487 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.510503 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.510522 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.510537 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:50Z","lastTransitionTime":"2026-01-28T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.582277 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 23:22:41.436522924 +0000 UTC Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.594152 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:50 crc kubenswrapper[5001]: E0128 17:16:50.594354 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.612417 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.612459 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.612475 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.612493 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.612505 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:50Z","lastTransitionTime":"2026-01-28T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.715331 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.715381 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.715394 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.715411 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.715425 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:50Z","lastTransitionTime":"2026-01-28T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.817476 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.817526 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.817538 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.817556 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.817567 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:50Z","lastTransitionTime":"2026-01-28T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.921186 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.921223 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.921238 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.921252 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:50 crc kubenswrapper[5001]: I0128 17:16:50.921261 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:50Z","lastTransitionTime":"2026-01-28T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.024018 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.024071 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.024084 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.024100 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.024112 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:51Z","lastTransitionTime":"2026-01-28T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.126754 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.126794 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.126804 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.126820 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.126831 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:51Z","lastTransitionTime":"2026-01-28T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.229677 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.229715 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.229724 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.229738 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.229748 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:51Z","lastTransitionTime":"2026-01-28T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.331758 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.331810 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.331822 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.331836 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.331851 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:51Z","lastTransitionTime":"2026-01-28T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.435486 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.435565 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.435579 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.435597 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.435608 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:51Z","lastTransitionTime":"2026-01-28T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.539109 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.539173 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.539191 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.539217 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.539237 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:51Z","lastTransitionTime":"2026-01-28T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.583275 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 05:24:26.172058169 +0000 UTC Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.593951 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.594052 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:51 crc kubenswrapper[5001]: E0128 17:16:51.594117 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.594187 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:51 crc kubenswrapper[5001]: E0128 17:16:51.594327 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:51 crc kubenswrapper[5001]: E0128 17:16:51.594499 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.641308 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.641365 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.641375 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.641392 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.641403 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:51Z","lastTransitionTime":"2026-01-28T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.744458 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.744523 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.744547 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.744575 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.744596 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:51Z","lastTransitionTime":"2026-01-28T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.847114 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.847159 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.847169 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.847185 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.847197 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:51Z","lastTransitionTime":"2026-01-28T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.950041 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.950081 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.950090 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.950103 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:51 crc kubenswrapper[5001]: I0128 17:16:51.950113 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:51Z","lastTransitionTime":"2026-01-28T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.052824 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.052876 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.052892 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.052911 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.052922 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:52Z","lastTransitionTime":"2026-01-28T17:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.155840 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.156114 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.156200 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.156272 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.156340 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:52Z","lastTransitionTime":"2026-01-28T17:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.259100 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.259913 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.260031 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.260140 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.260222 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:52Z","lastTransitionTime":"2026-01-28T17:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.363621 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.363655 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.363665 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.363678 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.363687 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:52Z","lastTransitionTime":"2026-01-28T17:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.465820 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.465855 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.465865 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.465877 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.465886 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:52Z","lastTransitionTime":"2026-01-28T17:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.567459 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.567487 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.567496 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.567508 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.567517 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:52Z","lastTransitionTime":"2026-01-28T17:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.584412 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 14:19:03.295275263 +0000 UTC Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.593873 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:52 crc kubenswrapper[5001]: E0128 17:16:52.594120 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.670561 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.670604 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.670629 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.670653 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.670670 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:52Z","lastTransitionTime":"2026-01-28T17:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.774030 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.774069 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.774077 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.774092 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.774101 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:52Z","lastTransitionTime":"2026-01-28T17:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.876367 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.876646 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.876667 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.876688 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.876703 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:52Z","lastTransitionTime":"2026-01-28T17:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.979400 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.979446 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.979457 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.979476 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:52 crc kubenswrapper[5001]: I0128 17:16:52.979486 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:52Z","lastTransitionTime":"2026-01-28T17:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.082569 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.082606 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.082615 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.082629 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.082639 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:53Z","lastTransitionTime":"2026-01-28T17:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.185370 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.185415 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.185426 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.185443 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.185453 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:53Z","lastTransitionTime":"2026-01-28T17:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.288593 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.288661 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.288671 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.288690 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.288705 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:53Z","lastTransitionTime":"2026-01-28T17:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.391488 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.391577 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.391593 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.391627 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.391644 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:53Z","lastTransitionTime":"2026-01-28T17:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.494149 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.494183 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.494191 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.494203 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.494213 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:53Z","lastTransitionTime":"2026-01-28T17:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.584694 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 10:27:51.177768943 +0000 UTC Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.593040 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.593074 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.593146 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:53 crc kubenswrapper[5001]: E0128 17:16:53.593185 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:53 crc kubenswrapper[5001]: E0128 17:16:53.593336 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:53 crc kubenswrapper[5001]: E0128 17:16:53.593417 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.596721 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.596749 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.596759 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.596775 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.596786 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:53Z","lastTransitionTime":"2026-01-28T17:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.699762 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.699886 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.699952 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.700044 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.700141 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:53Z","lastTransitionTime":"2026-01-28T17:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.803434 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.803549 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.803613 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.803646 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.803667 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:53Z","lastTransitionTime":"2026-01-28T17:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.906561 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.906612 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.906623 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.906640 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:53 crc kubenswrapper[5001]: I0128 17:16:53.906651 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:53Z","lastTransitionTime":"2026-01-28T17:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.008571 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.008650 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.008668 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.008694 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.008713 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:54Z","lastTransitionTime":"2026-01-28T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.110577 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.110614 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.110626 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.110640 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.110650 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:54Z","lastTransitionTime":"2026-01-28T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.216792 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.216836 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.216849 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.216872 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.216884 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:54Z","lastTransitionTime":"2026-01-28T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.319208 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.319256 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.319276 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.319295 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.319542 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:54Z","lastTransitionTime":"2026-01-28T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.421383 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.421418 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.421428 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.421441 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.421450 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:54Z","lastTransitionTime":"2026-01-28T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.434931 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.434990 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.435002 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.435018 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.435028 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:54Z","lastTransitionTime":"2026-01-28T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:54 crc kubenswrapper[5001]: E0128 17:16:54.446536 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.449640 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.449688 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.449697 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.449711 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.449719 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:54Z","lastTransitionTime":"2026-01-28T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:54 crc kubenswrapper[5001]: E0128 17:16:54.459644 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.462322 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.462347 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.462357 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.462370 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.462379 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:54Z","lastTransitionTime":"2026-01-28T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:54 crc kubenswrapper[5001]: E0128 17:16:54.472262 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.475272 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.475312 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.475324 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.475361 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.475371 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:54Z","lastTransitionTime":"2026-01-28T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:54 crc kubenswrapper[5001]: E0128 17:16:54.485939 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.488842 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.488958 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.489065 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.489135 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.489475 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:54Z","lastTransitionTime":"2026-01-28T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:54 crc kubenswrapper[5001]: E0128 17:16:54.499824 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: E0128 17:16:54.499934 5001 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.523341 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.523384 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.523395 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.523413 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.523424 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:54Z","lastTransitionTime":"2026-01-28T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.585324 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:54:55.792060805 +0000 UTC Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.593506 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:54 crc kubenswrapper[5001]: E0128 17:16:54.593869 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.604833 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.623592 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.626017 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.626046 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.626054 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.626067 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.626076 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:54Z","lastTransitionTime":"2026-01-28T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.640437 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.653894 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.664671 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.691901 5001 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:45Z\\\",\\\"message\\\":\\\"roller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0128 17:16:45.490606 6705 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0128 17:16:45.490611 6705 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0128 17:16:45.490615 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nF0128 17:16:45.490618 6705 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.704346 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.715293 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0931dfea-112d-4323-8e53-f98562b74038\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c04d38e9112e4823dc9124e21ecbbe0894c0c3030ff0e775cdc7b6190f5675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097e37d8901215be7f2c5daaffad3edce3037a44bf7a1553f5d79f4ad81f96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98ba6ed0b4a9cca0a4018dde2f2956b68bfa15e42392bbb86cb345053e5ec48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7324a0bfeca44aadf47d90d4b8317de00a420e6f3e307595a10401e38d5a8a02\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324a0bfeca44aadf47d90d4b8317de00a420e6f3e307595a10401e38d5a8a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.729299 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.729348 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.729362 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.729381 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.729401 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:54Z","lastTransitionTime":"2026-01-28T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.729606 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.744667 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.755955 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.769615 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.780201 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.792601 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.802320 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.811630 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da70482-1d5a-4149-95f7-0863485f6c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db1883c1bece7a8390fe115dc06b2497bb4dc91d36a057871df707126998b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00248ed3d4030b567d19fedab90823412db2393612e566785ee3532e8a6aee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q7lxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 
17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.821120 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rnn76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b5caa8d-b144-45a6-b334-e9e77c13064d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rnn76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:16:54Z is after 2025-08-24T17:21:41Z" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.831820 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.831855 5001 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.831884 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.831898 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.831909 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:54Z","lastTransitionTime":"2026-01-28T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.934357 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.934392 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.934400 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.934413 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:54 crc kubenswrapper[5001]: I0128 17:16:54.934422 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:54Z","lastTransitionTime":"2026-01-28T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.037043 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.037080 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.037093 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.037108 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.037117 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:55Z","lastTransitionTime":"2026-01-28T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.139786 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.140186 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.140402 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.140614 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.140825 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:55Z","lastTransitionTime":"2026-01-28T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.244454 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.244528 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.244544 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.244560 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.244574 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:55Z","lastTransitionTime":"2026-01-28T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.346273 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.346541 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.346604 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.346674 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.346770 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:55Z","lastTransitionTime":"2026-01-28T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.448816 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.449057 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.449170 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.449264 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.449364 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:55Z","lastTransitionTime":"2026-01-28T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.551914 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.552188 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.552247 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.552320 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.552379 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:55Z","lastTransitionTime":"2026-01-28T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.585662 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 03:00:19.336376421 +0000 UTC Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.593965 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:55 crc kubenswrapper[5001]: E0128 17:16:55.594284 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.594167 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:55 crc kubenswrapper[5001]: E0128 17:16:55.594529 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.594068 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:55 crc kubenswrapper[5001]: E0128 17:16:55.594690 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.655037 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.655085 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.655094 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.655108 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.655116 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:55Z","lastTransitionTime":"2026-01-28T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.763800 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.764270 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.764476 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.764709 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.764875 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:55Z","lastTransitionTime":"2026-01-28T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.867577 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.867611 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.867621 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.867636 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.867646 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:55Z","lastTransitionTime":"2026-01-28T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.969564 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.969595 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.969605 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.969619 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:55 crc kubenswrapper[5001]: I0128 17:16:55.969629 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:55Z","lastTransitionTime":"2026-01-28T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.071422 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.071464 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.071475 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.071491 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.071501 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:56Z","lastTransitionTime":"2026-01-28T17:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.174142 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.174198 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.174213 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.174246 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.174268 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:56Z","lastTransitionTime":"2026-01-28T17:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.277692 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.278101 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.278247 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.278366 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.278497 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:56Z","lastTransitionTime":"2026-01-28T17:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.381109 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.381182 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.381201 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.381225 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.381242 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:56Z","lastTransitionTime":"2026-01-28T17:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.483927 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.483970 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.483992 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.484006 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.484016 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:56Z","lastTransitionTime":"2026-01-28T17:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.585914 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 00:20:57.162956421 +0000 UTC Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.586407 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.586448 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.586457 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.586471 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.586480 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:56Z","lastTransitionTime":"2026-01-28T17:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.593819 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:56 crc kubenswrapper[5001]: E0128 17:16:56.593915 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.688240 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.688284 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.688297 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.688314 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.688327 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:56Z","lastTransitionTime":"2026-01-28T17:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.790773 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.790808 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.790818 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.790835 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.790845 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:56Z","lastTransitionTime":"2026-01-28T17:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.893295 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.893338 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.893350 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.893389 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.893404 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:56Z","lastTransitionTime":"2026-01-28T17:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.996154 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.996386 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.996450 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.996515 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:56 crc kubenswrapper[5001]: I0128 17:16:56.996575 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:56Z","lastTransitionTime":"2026-01-28T17:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.098918 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.098970 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.099007 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.099027 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.099043 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:57Z","lastTransitionTime":"2026-01-28T17:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.201774 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.202125 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.202264 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.202395 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.202501 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:57Z","lastTransitionTime":"2026-01-28T17:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.305292 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.305332 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.305344 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.305361 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.305373 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:57Z","lastTransitionTime":"2026-01-28T17:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.411653 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.411716 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.411736 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.411764 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.411783 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:57Z","lastTransitionTime":"2026-01-28T17:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.515151 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.515207 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.515224 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.515247 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.515264 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:57Z","lastTransitionTime":"2026-01-28T17:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.586960 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 17:28:45.601742318 +0000 UTC Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.593293 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:57 crc kubenswrapper[5001]: E0128 17:16:57.593430 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.593489 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.593575 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:57 crc kubenswrapper[5001]: E0128 17:16:57.593686 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:57 crc kubenswrapper[5001]: E0128 17:16:57.593842 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.618774 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.618808 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.618820 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.618837 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.618848 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:57Z","lastTransitionTime":"2026-01-28T17:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.721641 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.721701 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.721712 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.721732 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.721742 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:57Z","lastTransitionTime":"2026-01-28T17:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.824635 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.824673 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.824681 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.824696 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.824705 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:57Z","lastTransitionTime":"2026-01-28T17:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.926276 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.926847 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.926860 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.926877 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:57 crc kubenswrapper[5001]: I0128 17:16:57.926890 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:57Z","lastTransitionTime":"2026-01-28T17:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.029797 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.029845 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.029860 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.029877 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.029889 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:58Z","lastTransitionTime":"2026-01-28T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.131758 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.131793 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.131801 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.131813 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.131822 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:58Z","lastTransitionTime":"2026-01-28T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.233820 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.233880 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.233892 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.233910 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.233922 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:58Z","lastTransitionTime":"2026-01-28T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.335962 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.336019 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.336032 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.336047 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.336058 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:58Z","lastTransitionTime":"2026-01-28T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.438406 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.438448 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.438460 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.438475 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.438487 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:58Z","lastTransitionTime":"2026-01-28T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.541077 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.541299 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.541389 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.541463 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.541533 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:58Z","lastTransitionTime":"2026-01-28T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.588735 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:16:09.955982108 +0000 UTC Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.593426 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:16:58 crc kubenswrapper[5001]: E0128 17:16:58.593581 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.644125 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.644173 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.644186 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.644207 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.644220 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:58Z","lastTransitionTime":"2026-01-28T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.746835 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.747148 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.747218 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.747281 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.747351 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:58Z","lastTransitionTime":"2026-01-28T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.849240 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.849441 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.849499 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.849593 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.849687 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:58Z","lastTransitionTime":"2026-01-28T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.951549 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.951743 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.951811 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.951881 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:58 crc kubenswrapper[5001]: I0128 17:16:58.951945 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:58Z","lastTransitionTime":"2026-01-28T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.054720 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.054765 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.054776 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.054821 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.054833 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:59Z","lastTransitionTime":"2026-01-28T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.157339 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.157379 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.157388 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.157404 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.157413 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:59Z","lastTransitionTime":"2026-01-28T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.260093 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.260136 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.260146 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.260161 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.260171 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:59Z","lastTransitionTime":"2026-01-28T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.361840 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.362139 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.362217 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.362316 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.362414 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:59Z","lastTransitionTime":"2026-01-28T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.464649 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.464868 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.464939 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.465089 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.465164 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:59Z","lastTransitionTime":"2026-01-28T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.567165 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.567204 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.567213 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.567227 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.567237 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:59Z","lastTransitionTime":"2026-01-28T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.589405 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 19:53:51.818834156 +0000 UTC Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.593706 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.593728 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:16:59 crc kubenswrapper[5001]: E0128 17:16:59.593932 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.593767 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:16:59 crc kubenswrapper[5001]: E0128 17:16:59.594182 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:16:59 crc kubenswrapper[5001]: E0128 17:16:59.594329 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.669854 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.670140 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.670272 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.670356 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.670449 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:59Z","lastTransitionTime":"2026-01-28T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.772803 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.772838 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.772848 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.772863 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.772871 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:59Z","lastTransitionTime":"2026-01-28T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.875143 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.875195 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.875208 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.875225 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.875239 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:59Z","lastTransitionTime":"2026-01-28T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.978328 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.978392 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.978404 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.978420 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:16:59 crc kubenswrapper[5001]: I0128 17:16:59.978431 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:16:59Z","lastTransitionTime":"2026-01-28T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.081374 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.081407 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.081416 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.081431 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.081442 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:00Z","lastTransitionTime":"2026-01-28T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.183416 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.183451 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.183459 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.183475 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.183486 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:00Z","lastTransitionTime":"2026-01-28T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.285698 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.285738 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.285745 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.285760 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.285769 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:00Z","lastTransitionTime":"2026-01-28T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.388133 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.388179 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.388192 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.388209 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.388220 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:00Z","lastTransitionTime":"2026-01-28T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.490826 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.490862 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.490870 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.490883 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.490892 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:00Z","lastTransitionTime":"2026-01-28T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.589561 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 17:05:31.647970556 +0000 UTC Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.594113 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:00 crc kubenswrapper[5001]: E0128 17:17:00.594502 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.594662 5001 scope.go:117] "RemoveContainer" containerID="2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974" Jan 28 17:17:00 crc kubenswrapper[5001]: E0128 17:17:00.594797 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" podUID="324b03b5-a748-440b-b1ad-15022599b855" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.596096 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.596202 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.596267 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.596347 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.596442 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:00Z","lastTransitionTime":"2026-01-28T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.698729 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.699103 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.699216 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.699307 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.699378 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:00Z","lastTransitionTime":"2026-01-28T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.801611 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.801668 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.801684 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.801705 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.801721 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:00Z","lastTransitionTime":"2026-01-28T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.904492 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.904965 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.905060 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.905127 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:00 crc kubenswrapper[5001]: I0128 17:17:00.905238 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:00Z","lastTransitionTime":"2026-01-28T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.008074 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.008112 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.008121 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.008136 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.008146 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:01Z","lastTransitionTime":"2026-01-28T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.110957 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.111014 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.111024 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.111040 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.111052 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:01Z","lastTransitionTime":"2026-01-28T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.212903 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.212954 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.213004 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.213045 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.213056 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:01Z","lastTransitionTime":"2026-01-28T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.315558 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.315597 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.315608 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.315623 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.315635 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:01Z","lastTransitionTime":"2026-01-28T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.418193 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.418243 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.418253 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.418269 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.418281 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:01Z","lastTransitionTime":"2026-01-28T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.520763 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.520805 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.520817 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.520832 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.520843 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:01Z","lastTransitionTime":"2026-01-28T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.590429 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:16:29.763024083 +0000 UTC Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.593730 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.593730 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.593825 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:01 crc kubenswrapper[5001]: E0128 17:17:01.593927 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:01 crc kubenswrapper[5001]: E0128 17:17:01.594056 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:01 crc kubenswrapper[5001]: E0128 17:17:01.594146 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.623527 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.623557 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.623568 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.623583 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.623595 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:01Z","lastTransitionTime":"2026-01-28T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.725789 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.725828 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.725838 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.725856 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.725867 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:01Z","lastTransitionTime":"2026-01-28T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.828916 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.828957 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.828969 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.829018 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.829036 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:01Z","lastTransitionTime":"2026-01-28T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.931787 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.931826 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.931839 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.931854 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:01 crc kubenswrapper[5001]: I0128 17:17:01.931868 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:01Z","lastTransitionTime":"2026-01-28T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.035241 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.035294 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.035307 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.035326 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.035341 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:02Z","lastTransitionTime":"2026-01-28T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.137257 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.137317 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.137329 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.137347 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.137364 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:02Z","lastTransitionTime":"2026-01-28T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.239936 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.240004 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.240018 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.240036 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.240047 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:02Z","lastTransitionTime":"2026-01-28T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.342796 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.342877 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.342894 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.342919 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.342933 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:02Z","lastTransitionTime":"2026-01-28T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.445143 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.445181 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.445189 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.445209 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.445227 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:02Z","lastTransitionTime":"2026-01-28T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.547744 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.547785 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.547796 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.547812 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.547823 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:02Z","lastTransitionTime":"2026-01-28T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.590861 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 19:36:08.591067 +0000 UTC Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.593444 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:02 crc kubenswrapper[5001]: E0128 17:17:02.593639 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.650517 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.650564 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.650575 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.650589 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.650601 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:02Z","lastTransitionTime":"2026-01-28T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.753169 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.753211 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.753223 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.753242 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.753252 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:02Z","lastTransitionTime":"2026-01-28T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.856077 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.856116 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.856128 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.856165 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.856177 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:02Z","lastTransitionTime":"2026-01-28T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.959058 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.959101 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.959112 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.959128 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:02 crc kubenswrapper[5001]: I0128 17:17:02.959140 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:02Z","lastTransitionTime":"2026-01-28T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.061580 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.061620 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.061631 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.061645 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.061656 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:03Z","lastTransitionTime":"2026-01-28T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.166838 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.166880 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.166891 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.166906 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.166918 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:03Z","lastTransitionTime":"2026-01-28T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.268840 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.268879 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.268890 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.268905 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.268917 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:03Z","lastTransitionTime":"2026-01-28T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.371257 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.371294 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.371303 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.371317 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.371327 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:03Z","lastTransitionTime":"2026-01-28T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.474265 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.474302 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.474314 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.474328 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.474338 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:03Z","lastTransitionTime":"2026-01-28T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.576807 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.576855 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.576867 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.576886 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.576897 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:03Z","lastTransitionTime":"2026-01-28T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.591410 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 22:44:15.056309127 +0000 UTC Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.593675 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.593732 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:03 crc kubenswrapper[5001]: E0128 17:17:03.593777 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.593736 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:03 crc kubenswrapper[5001]: E0128 17:17:03.594091 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:03 crc kubenswrapper[5001]: E0128 17:17:03.594167 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.679025 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.679103 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.679127 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.679156 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.679182 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:03Z","lastTransitionTime":"2026-01-28T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.782090 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.782130 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.782141 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.782156 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.782167 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:03Z","lastTransitionTime":"2026-01-28T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.884727 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.884764 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.884775 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.884790 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.884802 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:03Z","lastTransitionTime":"2026-01-28T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.986668 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.986707 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.986717 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.986731 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:03 crc kubenswrapper[5001]: I0128 17:17:03.986740 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:03Z","lastTransitionTime":"2026-01-28T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.088673 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.088720 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.088733 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.088750 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.088761 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:04Z","lastTransitionTime":"2026-01-28T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.190698 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.190742 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.190751 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.190765 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.190776 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:04Z","lastTransitionTime":"2026-01-28T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.293418 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.293459 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.293468 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.293484 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.293493 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:04Z","lastTransitionTime":"2026-01-28T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.395205 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.395245 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.395255 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.395269 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.395278 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:04Z","lastTransitionTime":"2026-01-28T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.497308 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.497358 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.497369 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.497389 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.497399 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:04Z","lastTransitionTime":"2026-01-28T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.591661 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 12:56:21.570080531 +0000 UTC Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.593918 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:04 crc kubenswrapper[5001]: E0128 17:17:04.594228 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.600109 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.600149 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.600160 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.600179 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.600190 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:04Z","lastTransitionTime":"2026-01-28T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.604715 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.608144 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.621265 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc 
kubenswrapper[5001]: I0128 17:17:04.631100 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.648526 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:45Z\\\",\\\"message\\\":\\\"roller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0128 17:16:45.490606 6705 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0128 17:16:45.490611 6705 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0128 17:16:45.490615 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nF0128 17:16:45.490618 6705 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.654317 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.654353 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.654365 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.654381 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.654392 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:04Z","lastTransitionTime":"2026-01-28T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.662163 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: E0128 17:17:04.666085 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.669747 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.669785 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.669798 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.669814 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.669825 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:04Z","lastTransitionTime":"2026-01-28T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.674471 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0931dfea-112d-4323-8e53-f98562b74038\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c04d38e9112e4823dc9124e21ecbbe0894c0c3030ff0e775cdc7b6190f5675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097e37d8901215be7f2c5daaffad3edce3037a44bf7a1553f5d79f4ad81f96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98ba6ed0b4a9cca0a4018dde2f2956b68bfa15e42392bbb86cb345053e5ec48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7324a0bfeca44aadf47d90d4b8317de00a420e6f3e307595a10401e38d5a8a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324a0bfeca44aadf47d90d4b8317de00a420e6f3e307595a10401e38d5a8a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: E0128 17:17:04.680548 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b
592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.683576 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.683618 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.683633 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.683651 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.683670 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:04Z","lastTransitionTime":"2026-01-28T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.686616 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: E0128 17:17:04.694350 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.698533 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.698602 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.698616 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.698630 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.698642 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:04Z","lastTransitionTime":"2026-01-28T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.701122 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cf
d31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.711111 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: E0128 17:17:04.711196 5001 kubelet_node_status.go:585] "Error updating node status, 
will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a5
39b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597}
,{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.714812 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.714844 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.714855 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.714872 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.714883 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:04Z","lastTransitionTime":"2026-01-28T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.724018 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: E0128 17:17:04.726526 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: E0128 17:17:04.726641 5001 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.730835 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.730901 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.730911 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.730924 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.730933 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:04Z","lastTransitionTime":"2026-01-28T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.733580 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.747507 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.759258 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.770000 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da70482-1d5a-4149-95f7-0863485f6c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db1883c1bece7a8390fe115dc06b2497bb4dc91d36a057871df707126998b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00248ed3d4030b567d19fedab90823412db2393612e566785ee3532e8a6aee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q7lxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 
17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.778540 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rnn76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b5caa8d-b144-45a6-b334-e9e77c13064d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rnn76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.788438 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.799485 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:04Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.833458 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.833485 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.833519 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.833533 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.833543 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:04Z","lastTransitionTime":"2026-01-28T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.936013 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.936050 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.936061 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.936080 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:04 crc kubenswrapper[5001]: I0128 17:17:04.936099 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:04Z","lastTransitionTime":"2026-01-28T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.039227 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.039320 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.039335 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.039355 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.039371 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:05Z","lastTransitionTime":"2026-01-28T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.142038 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.142086 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.142097 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.142138 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.142149 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:05Z","lastTransitionTime":"2026-01-28T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.243871 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.243907 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.243918 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.243932 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.243941 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:05Z","lastTransitionTime":"2026-01-28T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.347328 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.347356 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.347364 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.347379 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.347387 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:05Z","lastTransitionTime":"2026-01-28T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.450236 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.450285 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.450295 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.450310 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.450322 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:05Z","lastTransitionTime":"2026-01-28T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.553088 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.553134 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.553145 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.553162 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.553175 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:05Z","lastTransitionTime":"2026-01-28T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.592650 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 20:44:09.53296024 +0000 UTC Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.593872 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.593923 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.593874 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:05 crc kubenswrapper[5001]: E0128 17:17:05.594005 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:05 crc kubenswrapper[5001]: E0128 17:17:05.594047 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:05 crc kubenswrapper[5001]: E0128 17:17:05.594129 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.659711 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.659753 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.659765 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.659783 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.659799 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:05Z","lastTransitionTime":"2026-01-28T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.762539 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.762591 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.762603 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.762621 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.762632 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:05Z","lastTransitionTime":"2026-01-28T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.865339 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.865384 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.865393 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.865408 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.865420 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:05Z","lastTransitionTime":"2026-01-28T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.967450 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.967492 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.967503 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.967519 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:05 crc kubenswrapper[5001]: I0128 17:17:05.967531 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:05Z","lastTransitionTime":"2026-01-28T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.073941 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.073988 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.074000 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.074014 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.074023 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:06Z","lastTransitionTime":"2026-01-28T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.176331 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.176361 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.176370 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.176387 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.176397 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:06Z","lastTransitionTime":"2026-01-28T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.278471 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.278508 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.278515 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.278534 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.278543 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:06Z","lastTransitionTime":"2026-01-28T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.380459 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.380501 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.380513 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.380528 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.380550 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:06Z","lastTransitionTime":"2026-01-28T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.483316 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.483379 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.483391 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.483409 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.483421 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:06Z","lastTransitionTime":"2026-01-28T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.488681 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs\") pod \"network-metrics-daemon-rnn76\" (UID: \"2b5caa8d-b144-45a6-b334-e9e77c13064d\") " pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:06 crc kubenswrapper[5001]: E0128 17:17:06.488831 5001 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 17:17:06 crc kubenswrapper[5001]: E0128 17:17:06.488876 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs podName:2b5caa8d-b144-45a6-b334-e9e77c13064d nodeName:}" failed. No retries permitted until 2026-01-28 17:17:38.488862552 +0000 UTC m=+104.656650782 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs") pod "network-metrics-daemon-rnn76" (UID: "2b5caa8d-b144-45a6-b334-e9e77c13064d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.586067 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.586125 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.586136 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.586154 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.586165 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:06Z","lastTransitionTime":"2026-01-28T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.593613 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 01:58:26.671761564 +0000 UTC Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.593792 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:06 crc kubenswrapper[5001]: E0128 17:17:06.593994 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.688954 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.689025 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.689037 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.689053 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.689064 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:06Z","lastTransitionTime":"2026-01-28T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.791884 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.792012 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.792029 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.792050 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.792097 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:06Z","lastTransitionTime":"2026-01-28T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.894099 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.894147 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.894160 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.894181 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.894194 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:06Z","lastTransitionTime":"2026-01-28T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.996887 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.996924 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.996934 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.996949 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:06 crc kubenswrapper[5001]: I0128 17:17:06.996961 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:06Z","lastTransitionTime":"2026-01-28T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.099511 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.099542 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.099550 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.099564 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.099576 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:07Z","lastTransitionTime":"2026-01-28T17:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.202442 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.202519 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.202543 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.202572 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.202612 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:07Z","lastTransitionTime":"2026-01-28T17:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.305051 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.305092 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.305101 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.305115 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.305127 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:07Z","lastTransitionTime":"2026-01-28T17:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.407972 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.408018 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.408029 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.408045 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.408056 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:07Z","lastTransitionTime":"2026-01-28T17:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.510704 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.510746 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.510755 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.510770 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.510780 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:07Z","lastTransitionTime":"2026-01-28T17:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.593702 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.593744 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:05:27.16651317 +0000 UTC Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.593771 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:07 crc kubenswrapper[5001]: E0128 17:17:07.593820 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.593678 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:07 crc kubenswrapper[5001]: E0128 17:17:07.593911 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:07 crc kubenswrapper[5001]: E0128 17:17:07.594043 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.612733 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.612775 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.612785 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.612803 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.612813 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:07Z","lastTransitionTime":"2026-01-28T17:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.715087 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.715124 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.715133 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.715146 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.715155 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:07Z","lastTransitionTime":"2026-01-28T17:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.818238 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.818290 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.818303 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.818321 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.818333 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:07Z","lastTransitionTime":"2026-01-28T17:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.920367 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.920413 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.920425 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.920440 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:07 crc kubenswrapper[5001]: I0128 17:17:07.920453 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:07Z","lastTransitionTime":"2026-01-28T17:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.023066 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.023117 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.023129 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.023144 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.023155 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:08Z","lastTransitionTime":"2026-01-28T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.125467 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.125514 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.125530 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.125545 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.125554 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:08Z","lastTransitionTime":"2026-01-28T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.227653 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.227712 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.227729 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.227744 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.227755 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:08Z","lastTransitionTime":"2026-01-28T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.330100 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.330141 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.330153 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.330168 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.330180 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:08Z","lastTransitionTime":"2026-01-28T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.433093 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.433124 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.433134 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.433146 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.433158 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:08Z","lastTransitionTime":"2026-01-28T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.536050 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.536127 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.536143 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.536161 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.536171 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:08Z","lastTransitionTime":"2026-01-28T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.593505 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:08 crc kubenswrapper[5001]: E0128 17:17:08.593690 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.594228 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 09:04:49.782830201 +0000 UTC Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.638276 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.638303 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.638312 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.638324 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.638334 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:08Z","lastTransitionTime":"2026-01-28T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.741260 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.741297 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.741308 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.741324 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.741335 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:08Z","lastTransitionTime":"2026-01-28T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.845189 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.845222 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.845234 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.845255 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.845269 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:08Z","lastTransitionTime":"2026-01-28T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.948590 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.948629 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.948637 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.948651 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:08 crc kubenswrapper[5001]: I0128 17:17:08.948659 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:08Z","lastTransitionTime":"2026-01-28T17:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.051194 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.051254 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.051272 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.051294 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.051311 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:09Z","lastTransitionTime":"2026-01-28T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.155723 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.155768 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.155784 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.155806 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.155823 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:09Z","lastTransitionTime":"2026-01-28T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.257941 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.258042 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.258059 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.258081 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.258096 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:09Z","lastTransitionTime":"2026-01-28T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.360096 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.360135 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.360147 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.360162 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.360174 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:09Z","lastTransitionTime":"2026-01-28T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.462837 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.462871 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.462883 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.462899 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.462910 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:09Z","lastTransitionTime":"2026-01-28T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.564935 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.564962 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.564986 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.564999 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.565008 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:09Z","lastTransitionTime":"2026-01-28T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.594197 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:09 crc kubenswrapper[5001]: E0128 17:17:09.594332 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.594376 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:45:23.611895326 +0000 UTC Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.594454 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:09 crc kubenswrapper[5001]: E0128 17:17:09.594511 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.594554 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:09 crc kubenswrapper[5001]: E0128 17:17:09.594609 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.667062 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.667095 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.667106 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.667124 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.667134 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:09Z","lastTransitionTime":"2026-01-28T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.769910 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.769952 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.769961 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.770008 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.770028 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:09Z","lastTransitionTime":"2026-01-28T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.872226 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.872280 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.872295 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.872318 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.872334 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:09Z","lastTransitionTime":"2026-01-28T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.975358 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.975389 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.975397 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.975410 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:09 crc kubenswrapper[5001]: I0128 17:17:09.975418 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:09Z","lastTransitionTime":"2026-01-28T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.077453 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.077508 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.077530 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.077545 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.077555 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:10Z","lastTransitionTime":"2026-01-28T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.180178 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.180223 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.180235 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.180255 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.180267 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:10Z","lastTransitionTime":"2026-01-28T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.282681 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.282723 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.282735 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.282753 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.282765 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:10Z","lastTransitionTime":"2026-01-28T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.385315 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.385364 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.385380 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.385401 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.385417 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:10Z","lastTransitionTime":"2026-01-28T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.488392 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.488449 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.488463 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.488479 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.488489 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:10Z","lastTransitionTime":"2026-01-28T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.592344 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.592396 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.592427 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.592446 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.592458 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:10Z","lastTransitionTime":"2026-01-28T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.593846 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:10 crc kubenswrapper[5001]: E0128 17:17:10.593944 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.594758 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:34:40.216630524 +0000 UTC Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.694827 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.694862 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.694871 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.694885 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.694897 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:10Z","lastTransitionTime":"2026-01-28T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.797570 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.797606 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.797616 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.797631 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.797642 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:10Z","lastTransitionTime":"2026-01-28T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.900178 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.900229 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.900245 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.900265 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.900281 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:10Z","lastTransitionTime":"2026-01-28T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.987665 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7fgxj_3cd579b1-57ae-4f44-85b5-53b6c746078b/kube-multus/0.log" Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.987718 5001 generic.go:334] "Generic (PLEG): container finished" podID="3cd579b1-57ae-4f44-85b5-53b6c746078b" containerID="6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a" exitCode=1 Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.987758 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7fgxj" event={"ID":"3cd579b1-57ae-4f44-85b5-53b6c746078b","Type":"ContainerDied","Data":"6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a"} Jan 28 17:17:10 crc kubenswrapper[5001]: I0128 17:17:10.988195 5001 scope.go:117] "RemoveContainer" containerID="6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.002471 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.002516 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.002528 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.002549 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.002562 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:11Z","lastTransitionTime":"2026-01-28T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.008152 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.019239 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.032406 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:17:10Z\\\",\\\"message\\\":\\\"2026-01-28T17:16:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6fd1356b-b8b7-46b0-a2d4-c88a0269b51f\\\\n2026-01-28T17:16:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6fd1356b-b8b7-46b0-a2d4-c88a0269b51f to /host/opt/cni/bin/\\\\n2026-01-28T17:16:25Z [verbose] multus-daemon started\\\\n2026-01-28T17:16:25Z [verbose] Readiness Indicator file check\\\\n2026-01-28T17:17:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.045297 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.056836 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.071066 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.081839 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da70482-1d5a-4149-95f7-0863485f6c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db1883c1bece7a8390fe115dc06b2497bb4dc91d36a057871df707126998b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00248ed3d4030b567d19fedab90823412db2393612e566785ee3532e8a6aee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q7lxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 28 
17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.094055 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rnn76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b5caa8d-b144-45a6-b334-e9e77c13064d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rnn76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.105069 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.107361 5001 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.107408 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.107426 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.107478 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:11Z","lastTransitionTime":"2026-01-28T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.108910 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c43652-e8ce-4516-a1f0-90e12bcc0c84\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee29641e4ac4c2e6a8a436db66fd5bbcda2e8330425cb9af0244ae450ed6bdfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a79da78ffdb89d61c54a6d833239de75772e09af67b48d4325a028c5adc5190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a79da78ffdb89d61c54a6d833239de75772e09af67b48d4325a028c5adc5190\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.121729 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.133401 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.144787 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.154778 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0931dfea-112d-4323-8e53-f98562b74038\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c04d38e9112e4823dc9124e21ecbbe0894c0c3030ff0e775cdc7b6190f5675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097e37d8901215be7f2c5daaffad3edce3037a44bf7a1553f5d79f4ad81f96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98ba6ed0b4a9cca0a4018dde2f2956b68bfa15e42392bbb86cb345053e5ec48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7324a0bfeca44aadf47d90d4b8317de00a420e6f3e307595a10401e38d5a8a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324a0bfeca44aadf47d90d4b8317de00a420e6f3e307595a10401e38d5a8a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.167355 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.179116 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.191753 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.203186 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.209477 5001 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.209718 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.209789 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.209857 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.209925 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:11Z","lastTransitionTime":"2026-01-28T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.223057 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2628351e2920bafb2fd864d9d8b789c2fdc5156b
34f0e82d3d54813a0c866974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:45Z\\\",\\\"message\\\":\\\"roller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0128 17:16:45.490606 6705 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0128 17:16:45.490611 6705 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0128 17:16:45.490615 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nF0128 17:16:45.490618 6705 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:11Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.312250 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.312292 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.312306 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.312325 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.312340 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:11Z","lastTransitionTime":"2026-01-28T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.414216 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.414249 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.414259 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.414274 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.414286 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:11Z","lastTransitionTime":"2026-01-28T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.516836 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.517153 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.517233 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.517306 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.517377 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:11Z","lastTransitionTime":"2026-01-28T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.593459 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:11 crc kubenswrapper[5001]: E0128 17:17:11.593606 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.593459 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:11 crc kubenswrapper[5001]: E0128 17:17:11.593692 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.593871 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:11 crc kubenswrapper[5001]: E0128 17:17:11.594067 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.595537 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:00:11.590917795 +0000 UTC Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.619456 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.619506 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.619527 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.619545 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.619558 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:11Z","lastTransitionTime":"2026-01-28T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.721358 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.721395 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.721408 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.721424 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.721434 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:11Z","lastTransitionTime":"2026-01-28T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.823741 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.823797 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.823807 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.823823 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.823833 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:11Z","lastTransitionTime":"2026-01-28T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.926192 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.926231 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.926238 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.926252 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.926261 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:11Z","lastTransitionTime":"2026-01-28T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.993003 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7fgxj_3cd579b1-57ae-4f44-85b5-53b6c746078b/kube-multus/0.log" Jan 28 17:17:11 crc kubenswrapper[5001]: I0128 17:17:11.993073 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7fgxj" event={"ID":"3cd579b1-57ae-4f44-85b5-53b6c746078b","Type":"ContainerStarted","Data":"b0c8ab8cf8afc73c6271962be74de68bfe5f1afb1a4d1725c0733393372a9fa7"} Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.008689 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c43652-e8ce-4516-a1f0-90e12bcc0c84\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee29641e4ac4c2e6a8a436db66fd5bbcda2e8330425cb9af0244ae450ed6bdfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a79da78ffdb89d61c54a6d833239de75772e09af67b48d4325a028c5adc5190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a79da78ffdb89d61c54a6d833239de75772e09af67b48d4325a028c5adc5190\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.029523 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.029607 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.029630 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.029661 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.029687 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:12Z","lastTransitionTime":"2026-01-28T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.036212 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.048785 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.058942 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.081069 5001 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:45Z\\\",\\\"message\\\":\\\"roller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0128 17:16:45.490606 6705 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0128 17:16:45.490611 6705 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0128 17:16:45.490615 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nF0128 17:16:45.490618 6705 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.096896 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.113038 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0931dfea-112d-4323-8e53-f98562b74038\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c04d38e9112e4823dc9124e21ecbbe0894c0c3030ff0e775cdc7b6190f5675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097e37d8901215be7f2c5daaffad3edce3037a44bf7a1553f5d79f4ad81f96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98ba6ed0b4a9cca0a4018dde2f2956b68bfa15e42392bbb86cb345053e5ec48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7324a0bfeca44aadf47d90d4b8317de00a420e6f3e307595a10401e38d5a8a02\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324a0bfeca44aadf47d90d4b8317de00a420e6f3e307595a10401e38d5a8a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.125826 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.131735 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.131785 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.131796 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.131813 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.131825 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:12Z","lastTransitionTime":"2026-01-28T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.139865 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.154847 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.171746 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.185145 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.197573 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0c8ab8cf8afc73c6271962be74de68bfe5f1afb1a4d1725c0733393372a9fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:17:10Z\\\",\\\"message\\\":\\\"2026-01-28T17:16:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6fd1356b-b8b7-46b0-a2d4-c88a0269b51f\\\\n2026-01-28T17:16:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6fd1356b-b8b7-46b0-a2d4-c88a0269b51f to /host/opt/cni/bin/\\\\n2026-01-28T17:16:25Z [verbose] multus-daemon started\\\\n2026-01-28T17:16:25Z [verbose] Readiness Indicator file check\\\\n2026-01-28T17:17:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.218219 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.229318 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.234282 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.234309 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.234318 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.234331 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.234340 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:12Z","lastTransitionTime":"2026-01-28T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.241371 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.255113 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da70482-1d5a-4149-95f7-0863485f6c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db1883c1bece7a8390fe115dc06b2497bb4dc91d36a057871df707126998b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00248ed3d4030b567d19fedab90823412db2393612e566785ee3532e8a6aee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q7lxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 28 
17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.266603 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rnn76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b5caa8d-b144-45a6-b334-e9e77c13064d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rnn76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:12Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.336286 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.336351 5001 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.336360 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.336374 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.336382 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:12Z","lastTransitionTime":"2026-01-28T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.438998 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.439037 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.439048 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.439062 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.439094 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:12Z","lastTransitionTime":"2026-01-28T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.540916 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.540959 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.540970 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.541012 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.541023 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:12Z","lastTransitionTime":"2026-01-28T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.593781 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:12 crc kubenswrapper[5001]: E0128 17:17:12.593963 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.595694 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 19:19:52.380810399 +0000 UTC Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.643555 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.643591 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.643604 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.643621 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.643634 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:12Z","lastTransitionTime":"2026-01-28T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.746168 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.746209 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.746217 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.746231 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.746240 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:12Z","lastTransitionTime":"2026-01-28T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.848923 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.849011 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.849035 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.849066 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.849089 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:12Z","lastTransitionTime":"2026-01-28T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.950989 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.951030 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.951042 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.951058 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:12 crc kubenswrapper[5001]: I0128 17:17:12.951072 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:12Z","lastTransitionTime":"2026-01-28T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.053063 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.053098 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.053115 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.053134 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.053145 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:13Z","lastTransitionTime":"2026-01-28T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.155600 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.155641 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.155653 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.155667 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.155678 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:13Z","lastTransitionTime":"2026-01-28T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.257957 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.258010 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.258021 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.258035 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.258045 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:13Z","lastTransitionTime":"2026-01-28T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.359919 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.359968 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.360014 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.360039 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.360058 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:13Z","lastTransitionTime":"2026-01-28T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.463586 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.463623 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.463637 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.463655 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.463669 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:13Z","lastTransitionTime":"2026-01-28T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.566442 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.566494 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.566510 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.566529 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.566543 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:13Z","lastTransitionTime":"2026-01-28T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.593234 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.593295 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.593232 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:13 crc kubenswrapper[5001]: E0128 17:17:13.593393 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:13 crc kubenswrapper[5001]: E0128 17:17:13.593567 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:13 crc kubenswrapper[5001]: E0128 17:17:13.593616 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.596420 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:46:50.920067443 +0000 UTC Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.669311 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.669369 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.669384 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.669401 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.669416 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:13Z","lastTransitionTime":"2026-01-28T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.772037 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.772078 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.772087 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.772101 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.772114 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:13Z","lastTransitionTime":"2026-01-28T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.874969 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.875035 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.875051 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.875071 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.875086 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:13Z","lastTransitionTime":"2026-01-28T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.978039 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.978095 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.978110 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.978134 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:13 crc kubenswrapper[5001]: I0128 17:17:13.978149 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:13Z","lastTransitionTime":"2026-01-28T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.080296 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.080346 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.080365 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.080386 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.080413 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:14Z","lastTransitionTime":"2026-01-28T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.183081 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.183124 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.183136 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.183155 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.183169 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:14Z","lastTransitionTime":"2026-01-28T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.286088 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.286138 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.286147 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.286164 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.286174 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:14Z","lastTransitionTime":"2026-01-28T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.387919 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.388015 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.388031 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.388053 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.388069 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:14Z","lastTransitionTime":"2026-01-28T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.490448 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.490773 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.490783 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.490798 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.490814 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:14Z","lastTransitionTime":"2026-01-28T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.592892 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.592937 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.592948 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.592963 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.593119 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.592994 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:14Z","lastTransitionTime":"2026-01-28T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:14 crc kubenswrapper[5001]: E0128 17:17:14.593481 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.593820 5001 scope.go:117] "RemoveContainer" containerID="2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.596745 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 08:09:37.841711301 +0000 UTC Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.606322 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c43652-e8ce-4516-a1f0-90e12bcc0c84\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee29641e4ac4c2e6a8a436db66fd5bbcda2e8330425cb9af0244ae450ed6bdfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a79da78ffdb89d61c54a6d833239de75772e09af67b48d4325a028c
5adc5190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a79da78ffdb89d61c54a6d833239de75772e09af67b48d4325a028c5adc5190\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.617476 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.638917 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.649653 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.660957 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0931dfea-112d-4323-8e53-f98562b74038\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c04d38e9112e4823dc9124e21ecbbe0894c0c3030ff0e775cdc7b6190f5675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097e37d8901215be7f2c5daaffad3edce3037a44bf7a1553f5d79f4ad81f96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98ba6ed0b4a9cca0a4018dde2f2956b68bfa15e42392bbb86cb345053e5ec48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7324a0bfeca44aadf47d90d4b8317de00a420e6f3e307595a10401e38d5a8a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324a0bfeca44aadf47d90d4b8317de00a420e6f3e307595a10401e38d5a8a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.689576 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.695689 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.696026 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.696168 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.696563 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.696749 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:14Z","lastTransitionTime":"2026-01-28T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.718754 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.737052 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.750648 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.768254 5001 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:45Z\\\",\\\"message\\\":\\\"roller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0128 17:16:45.490606 6705 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0128 17:16:45.490611 6705 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0128 17:16:45.490615 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nF0128 17:16:45.490618 6705 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.782423 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" 
(2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-28T17:17:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.793441 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.798823 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.798861 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.798871 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.798885 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.798897 5001 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:14Z","lastTransitionTime":"2026-01-28T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.807471 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0c8ab8cf8afc73c6271962be74de68bfe5f1afb1a4d1725c0733393372a9fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:17:10Z\\\",\\\"message\\\":\\\"2026-01-28T17:16:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6fd1356b-b8b7-46b0-a2d4-c88a0269b51f\\\\n2026-01-28T17:16:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6fd1356b-b8b7-46b0-a2d4-c88a0269b51f to /host/opt/cni/bin/\\\\n2026-01-28T17:16:25Z [verbose] multus-daemon started\\\\n2026-01-28T17:16:25Z [verbose] Readiness Indicator file check\\\\n2026-01-28T17:17:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.822201 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.822231 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.822242 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.822257 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.822268 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:14Z","lastTransitionTime":"2026-01-28T17:17:14Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.824250 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T17:17:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: E0128 17:17:14.835189 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 
2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.835528 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.838733 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.838758 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.838768 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.838781 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.838790 5001 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:14Z","lastTransitionTime":"2026-01-28T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.848375 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: E0128 17:17:14.852061 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 
2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.856954 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.856998 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.857006 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.857022 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.857031 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:14Z","lastTransitionTime":"2026-01-28T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.861486 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da70482-1d5a-4149-95f7-0863485f6c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db1883c1bece7a8390fe115dc06b2497bb4dc91d36a057871df707126998b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00248ed3d4030b567d19fedab90823412db2393612e566785ee3532e8a6aee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994
82919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q7lxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: E0128 17:17:14.868658 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 
2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.871660 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.871694 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.871706 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.871723 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.871733 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:14Z","lastTransitionTime":"2026-01-28T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.874553 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rnn76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b5caa8d-b144-45a6-b334-e9e77c13064d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rnn76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: E0128 17:17:14.882315 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"cru
n\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.885579 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.885606 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.885613 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.885626 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.885634 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:14Z","lastTransitionTime":"2026-01-28T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:14 crc kubenswrapper[5001]: E0128 17:17:14.897372 5001 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc15c8fd-f2c2-4ad5-902f-cce872e1953a\\\",\\\"systemUUID\\\":\\\"b592013b-7faa-4e90-8c4e-8a75265fa756\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:14Z is after 
2025-08-24T17:21:41Z" Jan 28 17:17:14 crc kubenswrapper[5001]: E0128 17:17:14.897519 5001 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.900682 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.900711 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.900721 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.900738 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:14 crc kubenswrapper[5001]: I0128 17:17:14.900749 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:14Z","lastTransitionTime":"2026-01-28T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.003085 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.003126 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.003134 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.003149 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.003160 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:15Z","lastTransitionTime":"2026-01-28T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.010894 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovnkube-controller/2.log" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.013773 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerStarted","Data":"30cd840340f9a11c11919b906fad36cd717bf6fdb68a826917c2015a04df7e57"} Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.014288 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.027274 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:15Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.039415 5001 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-7fgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0c8ab8cf8afc73c6271962be74de68bfe5f1afb1a4d1725c0733393372a9fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:17:10Z\\\",\\\"message\\\":\\\"2026-01-28T17:16:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6fd1356b-b8b7-46b0-a2d4-c88a0269b51f\\\\n2026-01-28T17:16:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6fd1356b-b8b7-46b0-a2d4-c88a0269b51f to /host/opt/cni/bin/\\\\n2026-01-28T17:16:25Z [verbose] multus-daemon started\\\\n2026-01-28T17:16:25Z [verbose] Readiness Indicator file check\\\\n2026-01-28T17:17:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:15Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.052289 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:15Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.063062 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:15Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.077008 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:15Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.087659 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da70482-1d5a-4149-95f7-0863485f6c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db1883c1bece7a8390fe115dc06b2497bb4dc91d36a057871df707126998b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00248ed3d4030b567d19fedab90823412db2393612e566785ee3532e8a6aee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q7lxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:15Z is after 2025-08-24T17:21:41Z" Jan 28 
17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.098084 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rnn76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b5caa8d-b144-45a6-b334-e9e77c13064d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rnn76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:15Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.105296 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.105332 5001 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.105340 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.105354 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.105364 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:15Z","lastTransitionTime":"2026-01-28T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.114200 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:15Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.129041 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:15Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.144850 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:15Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.153804 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c43652-e8ce-4516-a1f0-90e12bcc0c84\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee29641e4ac4c2e6a8a436db66fd5bbcda2e8330425cb9af0244ae450ed6bdfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a79da78ffdb89d61c54a6d833239de75772e09af67b48d4325a028c5adc5190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a79da78ffdb89d61c54a6d833239de75772e09af67b48d4325a028c5adc5190\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:15Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.166289 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0931dfea-112d-4323-8e53-f98562b74038\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c04d38e9112e4823dc9124e21ecbbe0894c0c3030ff0e775cdc7b6190f5675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097e37d8901215be7f2c5daaffad3edce3037a44bf7a1553f5d79f4ad81f96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98ba6ed0b4a9cca0a4018dde2f2956b68bfa15e42392bbb86cb345053e5ec48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7324a0bfeca44aadf47d90d4b8317de00a420e6f3e307595a10401e38d5a8a02\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324a0bfeca44aadf47d90d4b8317de00a420e6f3e307595a10401e38d5a8a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:15Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.216331 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.216365 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.216376 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.216391 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.216402 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:15Z","lastTransitionTime":"2026-01-28T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.226967 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:15Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.253955 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:15Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.269509 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:15Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.284115 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:15Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.304807 5001 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cd840340f9a11c11919b906fad36cd717bf6fdb68a826917c2015a04df7e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:45Z\\\",\\\"message\\\":\\\"roller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0128 17:16:45.490606 6705 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0128 17:16:45.490611 6705 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0128 17:16:45.490615 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nF0128 17:16:45.490618 6705 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has 
expired\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:15Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.317898 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:15Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.318617 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.318707 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.318810 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.318907 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.319007 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:15Z","lastTransitionTime":"2026-01-28T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.421012 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.421050 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.421123 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.421136 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.421145 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:15Z","lastTransitionTime":"2026-01-28T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.523351 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.523418 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.523430 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.523446 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.523462 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:15Z","lastTransitionTime":"2026-01-28T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.593302 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.593441 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.593592 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:15 crc kubenswrapper[5001]: E0128 17:17:15.593753 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:15 crc kubenswrapper[5001]: E0128 17:17:15.593590 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:15 crc kubenswrapper[5001]: E0128 17:17:15.594036 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.597326 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 01:56:43.245562645 +0000 UTC Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.625420 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.625467 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.625482 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.625498 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.625509 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:15Z","lastTransitionTime":"2026-01-28T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.728047 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.728093 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.728109 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.728125 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.728138 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:15Z","lastTransitionTime":"2026-01-28T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.829918 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.829960 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.829987 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.830004 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.830016 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:15Z","lastTransitionTime":"2026-01-28T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.932561 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.932617 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.932629 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.932651 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:15 crc kubenswrapper[5001]: I0128 17:17:15.932666 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:15Z","lastTransitionTime":"2026-01-28T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.017668 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovnkube-controller/3.log" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.018278 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovnkube-controller/2.log" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.021061 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerDied","Data":"30cd840340f9a11c11919b906fad36cd717bf6fdb68a826917c2015a04df7e57"} Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.021143 5001 scope.go:117] "RemoveContainer" containerID="2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.020878 5001 generic.go:334] "Generic (PLEG): container finished" podID="324b03b5-a748-440b-b1ad-15022599b855" containerID="30cd840340f9a11c11919b906fad36cd717bf6fdb68a826917c2015a04df7e57" exitCode=1 Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.022232 5001 scope.go:117] "RemoveContainer" containerID="30cd840340f9a11c11919b906fad36cd717bf6fdb68a826917c2015a04df7e57" Jan 28 17:17:16 crc kubenswrapper[5001]: E0128 17:17:16.022475 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" podUID="324b03b5-a748-440b-b1ad-15022599b855" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.034945 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.035007 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.035020 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.035036 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.035046 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:16Z","lastTransitionTime":"2026-01-28T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.037673 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c43652-e8ce-4516-a1f0-90e12bcc0c84\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee29641e4ac4c2e6a8a436db66fd5bbcda2e8330425cb9af0244ae450ed6bdfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a79da78ffdb89d61c54a6d833239de75772e09af67b48d4325a028c5adc5190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a79da78ffdb89d61c54a6d833239de75772e09af67b48d4325a028c5adc5190\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.050777 5001 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.062643 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.074904 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.087185 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0931dfea-112d-4323-8e53-f98562b74038\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c04d38e9112e4823dc9124e21ecbbe0894c0c3030ff0e775cdc7b6190f5675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097e37d8901215be7f2c5daaffad3edce3037a44bf7a1553f5d79f4ad81f96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98ba6ed0b4a9cca0a4018dde2f2956b68bfa15e42392bbb86cb345053e5ec48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7324a0bfeca44aadf47d90d4b8317de00a420e6f3e307595a10401e38d5a8a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324a0bfeca44aadf47d90d4b8317de00a420e6f3e307595a10401e38d5a8a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.100699 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.112943 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.127240 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.137051 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.137086 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.137098 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.137114 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.137125 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:16Z","lastTransitionTime":"2026-01-28T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.140840 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.162722 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cd840340f9a11c11919b906fad36cd717bf6fdb68a826917c2015a04df7e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2628351e2920bafb2fd864d9d8b789c2fdc5156b34f0e82d3d54813a0c866974\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:16:45Z\\\",\\\"message\\\":\\\"roller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0128 17:16:45.490606 6705 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0128 17:16:45.490611 6705 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0128 17:16:45.490615 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nF0128 17:16:45.490618 6705 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has 
expired\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cd840340f9a11c11919b906fad36cd717bf6fdb68a826917c2015a04df7e57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:17:15Z\\\",\\\"message\\\":\\\"r/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0128 17:17:15.645956 7135 services_controller.go:454] Service default/kubernetes for network=default has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers\\\\nI0128 17:17:15.646716 7135 services_controller.go:451] Built service openshift-authentication-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-authentication-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.150\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 17:17:15.646745 7135 services_controller.go:452] Built service openshift-authentication-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0128 17:17:15.646755 7135 services_controller.go:453] Built service 
openshift-authentication-operato\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.177266 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 
UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T17:17:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.191706 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.206713 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0c8ab8cf8afc73c6271962be74de68bfe5f1afb1a4d1725c0733393372a9fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:17:10Z\\\",\\\"message\\\":\\\"2026-01-28T17:16:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6fd1356b-b8b7-46b0-a2d4-c88a0269b51f\\\\n2026-01-28T17:16:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6fd1356b-b8b7-46b0-a2d4-c88a0269b51f to /host/opt/cni/bin/\\\\n2026-01-28T17:16:25Z [verbose] multus-daemon started\\\\n2026-01-28T17:16:25Z [verbose] Readiness Indicator file check\\\\n2026-01-28T17:17:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.230732 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.239343 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.239381 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:16 crc 
kubenswrapper[5001]: I0128 17:17:16.239392 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.239406 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.239417 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:16Z","lastTransitionTime":"2026-01-28T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.243241 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:16Z is after 2025-08-24T17:21:41Z" Jan 
28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.254514 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.267512 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da70482-1d5a-4149-95f7-0863485f6c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db1883c1bece7a8390fe115dc06b2497bb4dc91d36a057871df707126998b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00248ed3d4030b567d19fedab90823412db2393612e566785ee3532e8a6aee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q7lxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:16Z is after 2025-08-24T17:21:41Z" Jan 28 
17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.280895 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rnn76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b5caa8d-b144-45a6-b334-e9e77c13064d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rnn76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:16Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.340990 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.341022 5001 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.341030 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.341044 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.341053 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:16Z","lastTransitionTime":"2026-01-28T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.444042 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.444103 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.444114 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.444132 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.444147 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:16Z","lastTransitionTime":"2026-01-28T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.546285 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.546338 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.546350 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.546369 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.546382 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:16Z","lastTransitionTime":"2026-01-28T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.593312 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:16 crc kubenswrapper[5001]: E0128 17:17:16.593467 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.597655 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:45:12.95618131 +0000 UTC Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.649286 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.649320 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.649331 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.649347 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.649358 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:16Z","lastTransitionTime":"2026-01-28T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.751854 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.751890 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.751900 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.751915 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.751924 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:16Z","lastTransitionTime":"2026-01-28T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.855112 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.855189 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.855200 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.855217 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.855227 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:16Z","lastTransitionTime":"2026-01-28T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.958016 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.958057 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.958069 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.958085 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:16 crc kubenswrapper[5001]: I0128 17:17:16.958096 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:16Z","lastTransitionTime":"2026-01-28T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.026299 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovnkube-controller/3.log" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.029759 5001 scope.go:117] "RemoveContainer" containerID="30cd840340f9a11c11919b906fad36cd717bf6fdb68a826917c2015a04df7e57" Jan 28 17:17:17 crc kubenswrapper[5001]: E0128 17:17:17.030001 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" podUID="324b03b5-a748-440b-b1ad-15022599b855" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.050376 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b1b9ddb-6773-4b38-beb0-07d93f29f1af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158e1964ce2846a24433db2cf8ebd2ce9ebddab1b21b778880071462d21b11b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://bd831b8d3f5dafd8aeaab842cfc6573f8ca96ff7400fb6d8bccc76a1a577baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6248b5379da6dafb866239ce1290a75079f93942f5c4583734b99296c53341b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21805045a93de9a5753f0ddca5b4c76864ce5f2908a0c9c9adf827e3a82f30fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae
afeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aeafeacdb2af8ff7d7f0c0fea4fa5c94cfd31981f08a4edec2c6827b15c4ca6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f20e928debe836ecfc2065bd82f21ea7d68285d72cb194ac44cd38fa1fc3a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ec76fc2ee0347d93e0cc7335a1a12de87d07884d3d056344e199c91100adfc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btg7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dhcr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:17Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.060407 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzc7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9e939ff-2430-40ba-895c-51e6dc6561e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09867c504d8ee8f1df9c9db0b646815af989e1a980b9ce299e574f7f95b90a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cb5xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzc7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:17Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.060565 5001 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.060602 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.060623 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.060640 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.060652 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:17Z","lastTransitionTime":"2026-01-28T17:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.073200 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a62f06ca-6dcd-45eb-89c5-e284699a8ff8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"tension-apiserver-authentication::requestheader-client-ca-file\\\\nI0128 17:16:13.369465 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0128 17:16:13.369480 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0128 17:16:13.369743 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\"\\\\nI0128 17:16:13.369775 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-248247963/tls.crt::/tmp/serving-cert-248247963/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769620557\\\\\\\\\\\\\\\" (2026-01-28 17:15:57 +0000 UTC to 2026-02-27 17:15:58 +0000 UTC (now=2026-01-28 17:16:13.369742463 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.369960 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769620558\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769620558\\\\\\\\\\\\\\\" (2026-01-28 16:15:58 +0000 UTC to 2027-01-28 16:15:58 +0000 UTC (now=2026-01-28 17:16:13.369917568 +0000 UTC))\\\\\\\"\\\\nI0128 17:16:13.370019 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 17:16:13.370041 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 17:16:13.370054 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nI0128 17:16:13.372183 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372239 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0128 17:16:13.372260 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0128 17:16:13.373994 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:17Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.082211 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qz9lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"652f7f95-a748-4fd4-b323-19a93494ddc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4e17f7f6423f15099cc53cf46ae0d4a1cee8ca35222fefcbe553196d0f4aafc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfr44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qz9lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:17Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.092916 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7fgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd579b1-57ae-4f44-85b5-53b6c746078b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0c8ab8cf8afc73c6271962be74de68bfe5f1afb1a4d1725c0733393372a9fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:17:10Z\\\",\\\"message\\\":\\\"2026-01-28T17:16:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6fd1356b-b8b7-46b0-a2d4-c88a0269b51f\\\\n2026-01-28T17:16:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6fd1356b-b8b7-46b0-a2d4-c88a0269b51f to /host/opt/cni/bin/\\\\n2026-01-28T17:16:25Z [verbose] multus-daemon started\\\\n2026-01-28T17:16:25Z [verbose] Readiness Indicator file check\\\\n2026-01-28T17:17:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88h9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7fgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:17Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.104512 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:17Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.114058 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da70482-1d5a-4149-95f7-0863485f6c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db1883c1bece7a8390fe115dc06b2497bb4dc91d36a057871df707126998b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00248ed3d4030b567d19fedab90823412db2393612e566785ee3532e8a6aee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcspr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q7lxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:17Z is after 2025-08-24T17:21:41Z" Jan 28 
17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.121955 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rnn76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b5caa8d-b144-45a6-b334-e9e77c13064d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2sds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rnn76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:17Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.132204 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b15ba87a9ba465868d437aab74c96bb650934ec0b356621064c3cf3a7babb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:17Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.140377 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c43652-e8ce-4516-a1f0-90e12bcc0c84\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee29641e4ac4c2e6a8a436db66fd5bbcda2e8330425cb9af0244ae450ed6bdfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a79da78ffdb89d61c54a6d833239de75772e09af67b48d4325a028c5adc5190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a79da78ffdb89d61c54a6d833239de75772e09af67b48d4325a028c5adc5190\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:17Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.150157 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:17Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.161425 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc93485b771e0efbfac86906d5026c20555f6ff512568bc4d11c9623ffe9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb7616952f36e18108dc567dc1d52c22933ce5791509350384c074a31047f0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:17Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.163425 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.163468 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.163479 5001 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.163494 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.163505 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:17Z","lastTransitionTime":"2026-01-28T17:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.173377 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28f0035d80f790d7811b8bb254a608e85cead40b385a1df048ee5dff68204201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:17Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.183807 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de2d052-6f7c-4345-91fa-ba2fc7532251\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca1648f7de8820dab2a3b6cfc77370af088450084604424f8e9062ce2988a5db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thsl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mqgwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:17Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.201023 5001 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324b03b5-a748-440b-b1ad-15022599b855\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cd840340f9a11c11919b906fad36cd717bf6fdb68a826917c2015a04df7e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cd840340f9a11c11919b906fad36cd717bf6fdb68a826917c2015a04df7e57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T17:17:15Z\\\",\\\"message\\\":\\\"r/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0128 17:17:15.645956 7135 services_controller.go:454] Service default/kubernetes for network=default has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers\\\\nI0128 17:17:15.646716 7135 services_controller.go:451] Built service openshift-authentication-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-authentication-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.150\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 17:17:15.646745 7135 services_controller.go:452] Built service openshift-authentication-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0128 17:17:15.646755 7135 services_controller.go:453] Built service openshift-authentication-operato\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T17:17:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:16:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:16:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chwvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cnffr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:17Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.213106 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b56515-5c10-4201-a241-8caf15bbfda5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4768230ab00eb7119baf9da986102cb5de8eb34198383ed9e9f61997d615fb23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c933c43449ec5a77cdf70c6c88dbce58654f3532d3f7d2836b2bb67b482e4237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1fd4c2092e057a292d0b7c2e78a65bceb92ac8c7b42608270f48b13a65b915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:17Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.224852 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0931dfea-112d-4323-8e53-f98562b74038\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c04d38e9112e4823dc9124e21ecbbe0894c0c3030ff0e775cdc7b6190f5675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097e37d8901215be7f2c5daaffad3edce3037a44bf7a1553f5d79f4ad81f96b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98ba6ed0b4a9cca0a4018dde2f2956b68bfa15e42392bbb86cb345053e5ec48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7324a0bfeca44aadf47d90d4b8317de00a420e6f3e307595a10401e38d5a8a02\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324a0bfeca44aadf47d90d4b8317de00a420e6f3e307595a10401e38d5a8a02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T17:15:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T17:15:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:15:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:17Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.237569 5001 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T17:17:17Z is after 2025-08-24T17:21:41Z" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.266456 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.266667 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.266772 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.266848 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.266918 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:17Z","lastTransitionTime":"2026-01-28T17:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.369744 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.369796 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.369813 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.369833 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.369849 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:17Z","lastTransitionTime":"2026-01-28T17:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.396440 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:17:17 crc kubenswrapper[5001]: E0128 17:17:17.396647 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:21.396618217 +0000 UTC m=+147.564406477 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.472457 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.472515 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.472537 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.472559 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.472575 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:17Z","lastTransitionTime":"2026-01-28T17:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.498194 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.498242 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.498286 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.498313 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:17 crc kubenswrapper[5001]: E0128 17:17:17.498402 5001 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 17:17:17 crc kubenswrapper[5001]: E0128 17:17:17.498456 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 17:18:21.498439683 +0000 UTC m=+147.666227933 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 17:17:17 crc kubenswrapper[5001]: E0128 17:17:17.498510 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 17:17:17 crc kubenswrapper[5001]: E0128 17:17:17.498532 5001 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 17:17:17 crc kubenswrapper[5001]: E0128 17:17:17.498544 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 17:17:17 crc kubenswrapper[5001]: E0128 17:17:17.498628 5001 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:17:17 crc kubenswrapper[5001]: E0128 17:17:17.498512 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 17:17:17 crc kubenswrapper[5001]: E0128 17:17:17.498677 5001 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 17:17:17 crc kubenswrapper[5001]: E0128 17:17:17.498687 5001 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:17:17 crc kubenswrapper[5001]: E0128 17:17:17.498631 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 17:18:21.498612368 +0000 UTC m=+147.666400598 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 17:17:17 crc kubenswrapper[5001]: E0128 17:17:17.498725 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 17:18:21.49871553 +0000 UTC m=+147.666503770 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:17:17 crc kubenswrapper[5001]: E0128 17:17:17.498739 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:18:21.498732501 +0000 UTC m=+147.666520741 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.575367 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.575410 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.575419 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.575434 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.575444 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:17Z","lastTransitionTime":"2026-01-28T17:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.593717 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.593795 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.593741 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:17 crc kubenswrapper[5001]: E0128 17:17:17.593834 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:17 crc kubenswrapper[5001]: E0128 17:17:17.593916 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:17 crc kubenswrapper[5001]: E0128 17:17:17.594039 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.598762 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:26:30.274177916 +0000 UTC Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.677771 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.677810 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.677821 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.677836 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.677846 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:17Z","lastTransitionTime":"2026-01-28T17:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.780486 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.780526 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.780540 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.780557 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.780569 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:17Z","lastTransitionTime":"2026-01-28T17:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.883862 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.883903 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.883921 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.883945 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.884005 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:17Z","lastTransitionTime":"2026-01-28T17:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.986791 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.986826 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.986834 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.986848 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:17 crc kubenswrapper[5001]: I0128 17:17:17.986857 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:17Z","lastTransitionTime":"2026-01-28T17:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.089138 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.089178 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.089188 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.089203 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.089214 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:18Z","lastTransitionTime":"2026-01-28T17:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.191377 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.191428 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.191438 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.191457 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.191469 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:18Z","lastTransitionTime":"2026-01-28T17:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.293894 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.293948 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.293958 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.293971 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.294017 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:18Z","lastTransitionTime":"2026-01-28T17:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.396204 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.396249 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.396258 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.396272 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.396282 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:18Z","lastTransitionTime":"2026-01-28T17:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.498227 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.498273 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.498285 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.498301 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.498311 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:18Z","lastTransitionTime":"2026-01-28T17:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.593375 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:18 crc kubenswrapper[5001]: E0128 17:17:18.593518 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.599055 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 10:07:47.916214444 +0000 UTC Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.600191 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.600227 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.600238 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.600254 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.600265 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:18Z","lastTransitionTime":"2026-01-28T17:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.702177 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.702228 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.702238 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.702253 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.702264 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:18Z","lastTransitionTime":"2026-01-28T17:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.805143 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.805197 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.805223 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.805248 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.805264 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:18Z","lastTransitionTime":"2026-01-28T17:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.908511 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.908550 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.908560 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.908576 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:18 crc kubenswrapper[5001]: I0128 17:17:18.908587 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:18Z","lastTransitionTime":"2026-01-28T17:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.010946 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.011038 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.011073 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.011092 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.011107 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:19Z","lastTransitionTime":"2026-01-28T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.113515 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.113553 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.113562 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.113576 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.113585 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:19Z","lastTransitionTime":"2026-01-28T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.215379 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.215646 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.215715 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.215801 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.215890 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:19Z","lastTransitionTime":"2026-01-28T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.318669 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.318706 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.318715 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.318729 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.318740 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:19Z","lastTransitionTime":"2026-01-28T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.421167 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.421215 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.421226 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.421248 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.421261 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:19Z","lastTransitionTime":"2026-01-28T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.524204 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.524249 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.524261 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.524279 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.524290 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:19Z","lastTransitionTime":"2026-01-28T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.593373 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:19 crc kubenswrapper[5001]: E0128 17:17:19.593476 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.593392 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:19 crc kubenswrapper[5001]: E0128 17:17:19.593538 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.593378 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:19 crc kubenswrapper[5001]: E0128 17:17:19.593581 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.599501 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:06:03.269467955 +0000 UTC Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.626085 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.626130 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.626141 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.626156 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.626168 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:19Z","lastTransitionTime":"2026-01-28T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.728166 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.728221 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.728232 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.728248 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.728261 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:19Z","lastTransitionTime":"2026-01-28T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.831634 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.831675 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.831684 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.831696 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.831705 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:19Z","lastTransitionTime":"2026-01-28T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.934556 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.934625 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.934638 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.934656 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:19 crc kubenswrapper[5001]: I0128 17:17:19.934671 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:19Z","lastTransitionTime":"2026-01-28T17:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.037360 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.037515 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.037534 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.037558 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.037575 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:20Z","lastTransitionTime":"2026-01-28T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.140148 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.140196 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.140204 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.140217 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.140228 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:20Z","lastTransitionTime":"2026-01-28T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.243760 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.243816 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.243840 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.243869 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.243890 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:20Z","lastTransitionTime":"2026-01-28T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.346633 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.346674 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.346686 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.346702 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.346714 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:20Z","lastTransitionTime":"2026-01-28T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.448300 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.448342 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.448353 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.448369 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.448381 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:20Z","lastTransitionTime":"2026-01-28T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.551004 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.551049 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.551060 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.551077 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.551090 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:20Z","lastTransitionTime":"2026-01-28T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.593890 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:20 crc kubenswrapper[5001]: E0128 17:17:20.594195 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.600306 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 16:17:15.212928999 +0000 UTC Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.653924 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.653965 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.653998 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.654015 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.654026 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:20Z","lastTransitionTime":"2026-01-28T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.756910 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.756986 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.756997 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.757013 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.757022 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:20Z","lastTransitionTime":"2026-01-28T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.858862 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.858901 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.858915 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.858931 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.858943 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:20Z","lastTransitionTime":"2026-01-28T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.961655 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.961713 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.961734 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.961750 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:20 crc kubenswrapper[5001]: I0128 17:17:20.961761 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:20Z","lastTransitionTime":"2026-01-28T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.063557 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.063587 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.063595 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.063607 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.063617 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:21Z","lastTransitionTime":"2026-01-28T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.165712 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.165746 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.165755 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.165769 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.165778 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:21Z","lastTransitionTime":"2026-01-28T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.268829 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.268898 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.268914 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.268937 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.268956 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:21Z","lastTransitionTime":"2026-01-28T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.370943 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.371012 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.371023 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.371039 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.371051 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:21Z","lastTransitionTime":"2026-01-28T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.474477 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.474513 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.474523 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.474539 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.474551 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:21Z","lastTransitionTime":"2026-01-28T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.576706 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.576740 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.576748 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.576764 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.576776 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:21Z","lastTransitionTime":"2026-01-28T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.593154 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.593195 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.593235 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:21 crc kubenswrapper[5001]: E0128 17:17:21.593307 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:21 crc kubenswrapper[5001]: E0128 17:17:21.593414 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:21 crc kubenswrapper[5001]: E0128 17:17:21.593477 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.601382 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 17:16:50.220421557 +0000 UTC Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.678727 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.678757 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.678768 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.678783 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.678793 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:21Z","lastTransitionTime":"2026-01-28T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.781258 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.781594 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.781697 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.781799 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.781909 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:21Z","lastTransitionTime":"2026-01-28T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.884650 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.884702 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.884715 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.884731 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.884743 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:21Z","lastTransitionTime":"2026-01-28T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.987048 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.987077 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.987104 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.987118 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:21 crc kubenswrapper[5001]: I0128 17:17:21.987127 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:21Z","lastTransitionTime":"2026-01-28T17:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.089200 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.089245 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.089255 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.089272 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.089281 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:22Z","lastTransitionTime":"2026-01-28T17:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.191923 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.191957 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.192003 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.192054 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.192072 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:22Z","lastTransitionTime":"2026-01-28T17:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.294931 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.294956 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.294964 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.295003 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.295015 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:22Z","lastTransitionTime":"2026-01-28T17:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.396805 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.396833 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.396841 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.396854 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.396864 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:22Z","lastTransitionTime":"2026-01-28T17:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.498848 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.498881 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.498889 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.498901 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.498910 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:22Z","lastTransitionTime":"2026-01-28T17:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.595315 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:22 crc kubenswrapper[5001]: E0128 17:17:22.595478 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.600902 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.601111 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.601202 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.601308 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.601405 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:22Z","lastTransitionTime":"2026-01-28T17:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.601513 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 04:01:16.937927407 +0000 UTC Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.704131 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.704403 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.704471 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.704545 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.704605 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:22Z","lastTransitionTime":"2026-01-28T17:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.807017 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.807072 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.807081 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.807099 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.807108 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:22Z","lastTransitionTime":"2026-01-28T17:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.909533 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.909584 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.909628 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.909646 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:22 crc kubenswrapper[5001]: I0128 17:17:22.909658 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:22Z","lastTransitionTime":"2026-01-28T17:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.012181 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.012218 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.012226 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.012243 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.012254 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:23Z","lastTransitionTime":"2026-01-28T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.115550 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.115615 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.115632 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.115655 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.115672 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:23Z","lastTransitionTime":"2026-01-28T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.218294 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.218340 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.218354 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.218378 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.218396 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:23Z","lastTransitionTime":"2026-01-28T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.321741 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.321803 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.321814 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.321833 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.321846 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:23Z","lastTransitionTime":"2026-01-28T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.424529 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.424606 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.424621 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.424650 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.424668 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:23Z","lastTransitionTime":"2026-01-28T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.527864 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.527904 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.527916 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.527931 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.527943 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:23Z","lastTransitionTime":"2026-01-28T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.593237 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.593269 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:23 crc kubenswrapper[5001]: E0128 17:17:23.593508 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.593275 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:23 crc kubenswrapper[5001]: E0128 17:17:23.593391 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:23 crc kubenswrapper[5001]: E0128 17:17:23.593614 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.602128 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 23:12:47.315586726 +0000 UTC Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.630158 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.630215 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.630225 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.630242 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.630254 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:23Z","lastTransitionTime":"2026-01-28T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.732852 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.732892 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.732902 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.732920 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.732931 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:23Z","lastTransitionTime":"2026-01-28T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.835159 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.835191 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.835200 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.835214 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.835226 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:23Z","lastTransitionTime":"2026-01-28T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.940185 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.940240 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.940261 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.940280 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:23 crc kubenswrapper[5001]: I0128 17:17:23.940292 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:23Z","lastTransitionTime":"2026-01-28T17:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.042468 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.042708 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.042718 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.042733 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.042743 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:24Z","lastTransitionTime":"2026-01-28T17:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.144785 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.144836 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.144852 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.144901 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.144917 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:24Z","lastTransitionTime":"2026-01-28T17:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.247147 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.247228 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.247263 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.247293 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.247320 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:24Z","lastTransitionTime":"2026-01-28T17:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.349994 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.350032 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.350041 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.350056 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.350066 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:24Z","lastTransitionTime":"2026-01-28T17:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.453525 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.453549 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.453557 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.453570 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.453581 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:24Z","lastTransitionTime":"2026-01-28T17:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.556405 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.556442 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.556452 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.556465 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.556474 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:24Z","lastTransitionTime":"2026-01-28T17:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.593187 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:24 crc kubenswrapper[5001]: E0128 17:17:24.593491 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.602849 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 20:34:02.505731326 +0000 UTC Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.607156 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.657027 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q7lxd" podStartSLOduration=64.65700072 podStartE2EDuration="1m4.65700072s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:17:24.644486055 +0000 UTC m=+90.812274285" watchObservedRunningTime="2026-01-28 17:17:24.65700072 +0000 UTC m=+90.824788960" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.661663 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.661700 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.661710 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.661727 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.661740 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:24Z","lastTransitionTime":"2026-01-28T17:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.676826 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=20.676809914 podStartE2EDuration="20.676809914s" podCreationTimestamp="2026-01-28 17:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:17:24.676558468 +0000 UTC m=+90.844346708" watchObservedRunningTime="2026-01-28 17:17:24.676809914 +0000 UTC m=+90.844598144" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.729330 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podStartSLOduration=64.729305198 podStartE2EDuration="1m4.729305198s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:17:24.729065722 +0000 UTC m=+90.896853982" watchObservedRunningTime="2026-01-28 17:17:24.729305198 +0000 UTC m=+90.897093438" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.763773 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=71.763754184 podStartE2EDuration="1m11.763754184s" podCreationTimestamp="2026-01-28 17:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:17:24.763422435 +0000 UTC m=+90.931210675" watchObservedRunningTime="2026-01-28 17:17:24.763754184 +0000 UTC m=+90.931542414" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.764531 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.764569 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.764580 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.764800 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.764813 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:24Z","lastTransitionTime":"2026-01-28T17:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.779397 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=36.77938171 podStartE2EDuration="36.77938171s" podCreationTimestamp="2026-01-28 17:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:17:24.77901498 +0000 UTC m=+90.946803240" watchObservedRunningTime="2026-01-28 17:17:24.77938171 +0000 UTC m=+90.947169940" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.795325 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-7fgxj" podStartSLOduration=64.795303853 podStartE2EDuration="1m4.795303853s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:17:24.794246926 +0000 UTC m=+90.962035166" watchObservedRunningTime="2026-01-28 17:17:24.795303853 +0000 UTC m=+90.963092083" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.815613 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-dhcr2" podStartSLOduration=64.815594491 podStartE2EDuration="1m4.815594491s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:17:24.815258862 +0000 UTC m=+90.983047102" watchObservedRunningTime="2026-01-28 17:17:24.815594491 +0000 UTC m=+90.983382721" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.853808 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.853794393 podStartE2EDuration="1m11.853794393s" podCreationTimestamp="2026-01-28 17:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:17:24.853186247 +0000 UTC m=+91.020974477" watchObservedRunningTime="2026-01-28 17:17:24.853794393 +0000 UTC m=+91.021582623" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.853958 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-bzc7t" podStartSLOduration=64.853954177 podStartE2EDuration="1m4.853954177s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:17:24.83675519 +0000 UTC m=+91.004543420" watchObservedRunningTime="2026-01-28 17:17:24.853954177 +0000 UTC m=+91.021742407" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.865913 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-qz9lj" podStartSLOduration=64.865892828 podStartE2EDuration="1m4.865892828s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:17:24.86521569 +0000 UTC m=+91.033003920" watchObservedRunningTime="2026-01-28 17:17:24.865892828 +0000 UTC m=+91.033681058" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.866887 5001 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.866934 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.866947 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.866966 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.866992 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:24Z","lastTransitionTime":"2026-01-28T17:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.969699 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.969746 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.969757 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.969773 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:24 crc kubenswrapper[5001]: I0128 17:17:24.969785 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:24Z","lastTransitionTime":"2026-01-28T17:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.023026 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.023064 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.023072 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.023086 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.023354 5001 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T17:17:25Z","lastTransitionTime":"2026-01-28T17:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.062540 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6"] Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.062963 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.065524 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.065604 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.065610 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.065656 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.105096 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=1.105078593 podStartE2EDuration="1.105078593s" podCreationTimestamp="2026-01-28 17:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:17:25.103584254 +0000 UTC m=+91.271372484" watchObservedRunningTime="2026-01-28 17:17:25.105078593 +0000 UTC m=+91.272866823" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.189823 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/82974834-2607-4257-90fe-7487726fef69-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mcrc6\" (UID: \"82974834-2607-4257-90fe-7487726fef69\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.190040 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82974834-2607-4257-90fe-7487726fef69-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mcrc6\" (UID: \"82974834-2607-4257-90fe-7487726fef69\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.190135 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/82974834-2607-4257-90fe-7487726fef69-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mcrc6\" (UID: \"82974834-2607-4257-90fe-7487726fef69\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.190174 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82974834-2607-4257-90fe-7487726fef69-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mcrc6\" (UID: \"82974834-2607-4257-90fe-7487726fef69\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" Jan 
28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.190192 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82974834-2607-4257-90fe-7487726fef69-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mcrc6\" (UID: \"82974834-2607-4257-90fe-7487726fef69\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.291543 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/82974834-2607-4257-90fe-7487726fef69-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mcrc6\" (UID: \"82974834-2607-4257-90fe-7487726fef69\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.291610 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/82974834-2607-4257-90fe-7487726fef69-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mcrc6\" (UID: \"82974834-2607-4257-90fe-7487726fef69\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.291641 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82974834-2607-4257-90fe-7487726fef69-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mcrc6\" (UID: \"82974834-2607-4257-90fe-7487726fef69\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.291672 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/82974834-2607-4257-90fe-7487726fef69-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mcrc6\" (UID: \"82974834-2607-4257-90fe-7487726fef69\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.291695 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82974834-2607-4257-90fe-7487726fef69-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mcrc6\" (UID: \"82974834-2607-4257-90fe-7487726fef69\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.291715 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82974834-2607-4257-90fe-7487726fef69-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mcrc6\" (UID: \"82974834-2607-4257-90fe-7487726fef69\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.291739 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/82974834-2607-4257-90fe-7487726fef69-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mcrc6\" (UID: \"82974834-2607-4257-90fe-7487726fef69\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.292591 5001 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82974834-2607-4257-90fe-7487726fef69-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mcrc6\" (UID: \"82974834-2607-4257-90fe-7487726fef69\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.299022 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82974834-2607-4257-90fe-7487726fef69-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mcrc6\" (UID: \"82974834-2607-4257-90fe-7487726fef69\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.315244 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82974834-2607-4257-90fe-7487726fef69-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mcrc6\" (UID: \"82974834-2607-4257-90fe-7487726fef69\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.379692 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.593347 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.593419 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.593463 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:25 crc kubenswrapper[5001]: E0128 17:17:25.594075 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:25 crc kubenswrapper[5001]: E0128 17:17:25.593843 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:25 crc kubenswrapper[5001]: E0128 17:17:25.594143 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.603874 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:23:51.584295028 +0000 UTC Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.603958 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 28 17:17:25 crc kubenswrapper[5001]: I0128 17:17:25.613348 5001 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 17:17:26 crc kubenswrapper[5001]: I0128 17:17:26.082992 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" event={"ID":"82974834-2607-4257-90fe-7487726fef69","Type":"ContainerStarted","Data":"9708eb4d3532132ea1f9859aeb531a63109d06b0a5587ec077c35ce1d46de982"} Jan 28 17:17:26 crc kubenswrapper[5001]: I0128 17:17:26.083070 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" event={"ID":"82974834-2607-4257-90fe-7487726fef69","Type":"ContainerStarted","Data":"01b66bf225af8216024444500f8462bb99f9d57ce3c5f1a05b3f5211f2f7e427"} Jan 28 17:17:26 crc kubenswrapper[5001]: I0128 17:17:26.593352 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:26 crc kubenswrapper[5001]: E0128 17:17:26.593524 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:27 crc kubenswrapper[5001]: I0128 17:17:27.593760 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:27 crc kubenswrapper[5001]: I0128 17:17:27.593822 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:27 crc kubenswrapper[5001]: I0128 17:17:27.593772 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:27 crc kubenswrapper[5001]: E0128 17:17:27.593908 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:27 crc kubenswrapper[5001]: E0128 17:17:27.594004 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:27 crc kubenswrapper[5001]: E0128 17:17:27.594074 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:28 crc kubenswrapper[5001]: I0128 17:17:28.593277 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:28 crc kubenswrapper[5001]: E0128 17:17:28.593419 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:29 crc kubenswrapper[5001]: I0128 17:17:29.593178 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:29 crc kubenswrapper[5001]: I0128 17:17:29.593288 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:29 crc kubenswrapper[5001]: I0128 17:17:29.593336 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:29 crc kubenswrapper[5001]: E0128 17:17:29.593443 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:29 crc kubenswrapper[5001]: E0128 17:17:29.593532 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:29 crc kubenswrapper[5001]: E0128 17:17:29.594011 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:29 crc kubenswrapper[5001]: I0128 17:17:29.594255 5001 scope.go:117] "RemoveContainer" containerID="30cd840340f9a11c11919b906fad36cd717bf6fdb68a826917c2015a04df7e57" Jan 28 17:17:29 crc kubenswrapper[5001]: E0128 17:17:29.594406 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" podUID="324b03b5-a748-440b-b1ad-15022599b855" Jan 28 17:17:30 crc kubenswrapper[5001]: I0128 17:17:30.594060 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:30 crc kubenswrapper[5001]: E0128 17:17:30.594761 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:31 crc kubenswrapper[5001]: I0128 17:17:31.593758 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:31 crc kubenswrapper[5001]: I0128 17:17:31.593758 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:31 crc kubenswrapper[5001]: I0128 17:17:31.594278 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:31 crc kubenswrapper[5001]: E0128 17:17:31.594413 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:31 crc kubenswrapper[5001]: E0128 17:17:31.594501 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:31 crc kubenswrapper[5001]: E0128 17:17:31.594803 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:32 crc kubenswrapper[5001]: I0128 17:17:32.593535 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:32 crc kubenswrapper[5001]: E0128 17:17:32.593747 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:33 crc kubenswrapper[5001]: I0128 17:17:33.593113 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:33 crc kubenswrapper[5001]: E0128 17:17:33.593241 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:33 crc kubenswrapper[5001]: I0128 17:17:33.593267 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:33 crc kubenswrapper[5001]: I0128 17:17:33.593317 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:33 crc kubenswrapper[5001]: E0128 17:17:33.593335 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:33 crc kubenswrapper[5001]: E0128 17:17:33.593454 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:34 crc kubenswrapper[5001]: I0128 17:17:34.593959 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:34 crc kubenswrapper[5001]: E0128 17:17:34.595273 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:35 crc kubenswrapper[5001]: I0128 17:17:35.594078 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:35 crc kubenswrapper[5001]: I0128 17:17:35.594105 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:35 crc kubenswrapper[5001]: I0128 17:17:35.594097 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:35 crc kubenswrapper[5001]: E0128 17:17:35.594238 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:35 crc kubenswrapper[5001]: E0128 17:17:35.594326 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:35 crc kubenswrapper[5001]: E0128 17:17:35.594414 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:36 crc kubenswrapper[5001]: I0128 17:17:36.593334 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:36 crc kubenswrapper[5001]: E0128 17:17:36.593505 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:37 crc kubenswrapper[5001]: I0128 17:17:37.593272 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:37 crc kubenswrapper[5001]: I0128 17:17:37.593340 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:37 crc kubenswrapper[5001]: I0128 17:17:37.593399 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:37 crc kubenswrapper[5001]: E0128 17:17:37.593531 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:37 crc kubenswrapper[5001]: E0128 17:17:37.593625 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:37 crc kubenswrapper[5001]: E0128 17:17:37.593726 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:38 crc kubenswrapper[5001]: I0128 17:17:38.533357 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs\") pod \"network-metrics-daemon-rnn76\" (UID: \"2b5caa8d-b144-45a6-b334-e9e77c13064d\") " pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:38 crc kubenswrapper[5001]: E0128 17:17:38.533521 5001 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 17:17:38 crc kubenswrapper[5001]: E0128 17:17:38.533583 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs podName:2b5caa8d-b144-45a6-b334-e9e77c13064d nodeName:}" failed. No retries permitted until 2026-01-28 17:18:42.533562715 +0000 UTC m=+168.701350945 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs") pod "network-metrics-daemon-rnn76" (UID: "2b5caa8d-b144-45a6-b334-e9e77c13064d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 17:17:38 crc kubenswrapper[5001]: I0128 17:17:38.596805 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:38 crc kubenswrapper[5001]: E0128 17:17:38.596944 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:39 crc kubenswrapper[5001]: I0128 17:17:39.593309 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:39 crc kubenswrapper[5001]: I0128 17:17:39.593385 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:39 crc kubenswrapper[5001]: I0128 17:17:39.593335 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:39 crc kubenswrapper[5001]: E0128 17:17:39.593456 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:39 crc kubenswrapper[5001]: E0128 17:17:39.593551 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:39 crc kubenswrapper[5001]: E0128 17:17:39.593704 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:40 crc kubenswrapper[5001]: I0128 17:17:40.593288 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:40 crc kubenswrapper[5001]: E0128 17:17:40.593398 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:41 crc kubenswrapper[5001]: I0128 17:17:41.594027 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:41 crc kubenswrapper[5001]: I0128 17:17:41.594034 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:41 crc kubenswrapper[5001]: E0128 17:17:41.594146 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:41 crc kubenswrapper[5001]: I0128 17:17:41.594191 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:41 crc kubenswrapper[5001]: E0128 17:17:41.594255 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:41 crc kubenswrapper[5001]: E0128 17:17:41.594323 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:42 crc kubenswrapper[5001]: I0128 17:17:42.594311 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:42 crc kubenswrapper[5001]: E0128 17:17:42.594570 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:42 crc kubenswrapper[5001]: I0128 17:17:42.596227 5001 scope.go:117] "RemoveContainer" containerID="30cd840340f9a11c11919b906fad36cd717bf6fdb68a826917c2015a04df7e57" Jan 28 17:17:42 crc kubenswrapper[5001]: E0128 17:17:42.596417 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-cnffr_openshift-ovn-kubernetes(324b03b5-a748-440b-b1ad-15022599b855)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" podUID="324b03b5-a748-440b-b1ad-15022599b855" Jan 28 17:17:43 crc kubenswrapper[5001]: I0128 17:17:43.593652 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:43 crc kubenswrapper[5001]: I0128 17:17:43.593661 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:43 crc kubenswrapper[5001]: E0128 17:17:43.593867 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:43 crc kubenswrapper[5001]: I0128 17:17:43.593670 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:43 crc kubenswrapper[5001]: E0128 17:17:43.593936 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:43 crc kubenswrapper[5001]: E0128 17:17:43.594207 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:44 crc kubenswrapper[5001]: I0128 17:17:44.593313 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:44 crc kubenswrapper[5001]: E0128 17:17:44.595557 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:45 crc kubenswrapper[5001]: I0128 17:17:45.593244 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:45 crc kubenswrapper[5001]: I0128 17:17:45.593313 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:45 crc kubenswrapper[5001]: I0128 17:17:45.593388 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:45 crc kubenswrapper[5001]: E0128 17:17:45.593527 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:45 crc kubenswrapper[5001]: E0128 17:17:45.593586 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:45 crc kubenswrapper[5001]: E0128 17:17:45.593660 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:46 crc kubenswrapper[5001]: I0128 17:17:46.593645 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:46 crc kubenswrapper[5001]: E0128 17:17:46.593839 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:47 crc kubenswrapper[5001]: I0128 17:17:47.593530 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:47 crc kubenswrapper[5001]: I0128 17:17:47.593635 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:47 crc kubenswrapper[5001]: I0128 17:17:47.593721 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:47 crc kubenswrapper[5001]: E0128 17:17:47.593731 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:47 crc kubenswrapper[5001]: E0128 17:17:47.593819 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:47 crc kubenswrapper[5001]: E0128 17:17:47.593700 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:48 crc kubenswrapper[5001]: I0128 17:17:48.593408 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:48 crc kubenswrapper[5001]: E0128 17:17:48.593538 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:49 crc kubenswrapper[5001]: I0128 17:17:49.593060 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:49 crc kubenswrapper[5001]: I0128 17:17:49.593113 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:49 crc kubenswrapper[5001]: I0128 17:17:49.593068 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:49 crc kubenswrapper[5001]: E0128 17:17:49.593245 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:49 crc kubenswrapper[5001]: E0128 17:17:49.593345 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:49 crc kubenswrapper[5001]: E0128 17:17:49.593445 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:50 crc kubenswrapper[5001]: I0128 17:17:50.593797 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:50 crc kubenswrapper[5001]: E0128 17:17:50.593963 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:51 crc kubenswrapper[5001]: I0128 17:17:51.593686 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:51 crc kubenswrapper[5001]: I0128 17:17:51.593710 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:51 crc kubenswrapper[5001]: I0128 17:17:51.593827 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:51 crc kubenswrapper[5001]: E0128 17:17:51.593936 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:51 crc kubenswrapper[5001]: E0128 17:17:51.594117 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:51 crc kubenswrapper[5001]: E0128 17:17:51.594200 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:52 crc kubenswrapper[5001]: I0128 17:17:52.593847 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:52 crc kubenswrapper[5001]: E0128 17:17:52.593966 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:53 crc kubenswrapper[5001]: I0128 17:17:53.593877 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:53 crc kubenswrapper[5001]: I0128 17:17:53.593920 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:53 crc kubenswrapper[5001]: I0128 17:17:53.593873 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:53 crc kubenswrapper[5001]: E0128 17:17:53.594009 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:53 crc kubenswrapper[5001]: E0128 17:17:53.594278 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:53 crc kubenswrapper[5001]: E0128 17:17:53.594406 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:54 crc kubenswrapper[5001]: E0128 17:17:54.552046 5001 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 28 17:17:54 crc kubenswrapper[5001]: I0128 17:17:54.596037 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:54 crc kubenswrapper[5001]: E0128 17:17:54.596856 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:54 crc kubenswrapper[5001]: E0128 17:17:54.698016 5001 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 17:17:55 crc kubenswrapper[5001]: I0128 17:17:55.593847 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:55 crc kubenswrapper[5001]: I0128 17:17:55.593941 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:55 crc kubenswrapper[5001]: I0128 17:17:55.593847 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:55 crc kubenswrapper[5001]: E0128 17:17:55.593998 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:55 crc kubenswrapper[5001]: E0128 17:17:55.594113 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:55 crc kubenswrapper[5001]: E0128 17:17:55.594179 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:56 crc kubenswrapper[5001]: I0128 17:17:56.593688 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:56 crc kubenswrapper[5001]: E0128 17:17:56.593847 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:57 crc kubenswrapper[5001]: I0128 17:17:57.177642 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7fgxj_3cd579b1-57ae-4f44-85b5-53b6c746078b/kube-multus/1.log" Jan 28 17:17:57 crc kubenswrapper[5001]: I0128 17:17:57.178250 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7fgxj_3cd579b1-57ae-4f44-85b5-53b6c746078b/kube-multus/0.log" Jan 28 17:17:57 crc kubenswrapper[5001]: I0128 17:17:57.178287 5001 generic.go:334] "Generic (PLEG): container finished" podID="3cd579b1-57ae-4f44-85b5-53b6c746078b" containerID="b0c8ab8cf8afc73c6271962be74de68bfe5f1afb1a4d1725c0733393372a9fa7" exitCode=1 Jan 28 17:17:57 crc kubenswrapper[5001]: I0128 17:17:57.178326 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7fgxj" event={"ID":"3cd579b1-57ae-4f44-85b5-53b6c746078b","Type":"ContainerDied","Data":"b0c8ab8cf8afc73c6271962be74de68bfe5f1afb1a4d1725c0733393372a9fa7"} Jan 28 17:17:57 crc kubenswrapper[5001]: I0128 17:17:57.178389 5001 scope.go:117] "RemoveContainer" containerID="6c30e3609d8d67b161661154f90e65e5de6aa090dbc252c0237f8fdbb87e0a4a" Jan 28 17:17:57 crc kubenswrapper[5001]: I0128 17:17:57.178831 5001 scope.go:117] "RemoveContainer" containerID="b0c8ab8cf8afc73c6271962be74de68bfe5f1afb1a4d1725c0733393372a9fa7" Jan 28 17:17:57 crc kubenswrapper[5001]: E0128 17:17:57.179024 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-7fgxj_openshift-multus(3cd579b1-57ae-4f44-85b5-53b6c746078b)\"" pod="openshift-multus/multus-7fgxj" podUID="3cd579b1-57ae-4f44-85b5-53b6c746078b" Jan 28 17:17:57 crc kubenswrapper[5001]: I0128 17:17:57.195913 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mcrc6" podStartSLOduration=97.195894864 podStartE2EDuration="1m37.195894864s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:17:26.098780563 +0000 UTC m=+92.266568793" watchObservedRunningTime="2026-01-28 17:17:57.195894864 +0000 UTC m=+123.363683104" Jan 28 17:17:57 crc kubenswrapper[5001]: I0128 17:17:57.593302 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:57 crc kubenswrapper[5001]: E0128 17:17:57.593486 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:57 crc kubenswrapper[5001]: I0128 17:17:57.593538 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:57 crc kubenswrapper[5001]: E0128 17:17:57.593661 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:57 crc kubenswrapper[5001]: I0128 17:17:57.594213 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:57 crc kubenswrapper[5001]: E0128 17:17:57.594452 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:57 crc kubenswrapper[5001]: I0128 17:17:57.594518 5001 scope.go:117] "RemoveContainer" containerID="30cd840340f9a11c11919b906fad36cd717bf6fdb68a826917c2015a04df7e57" Jan 28 17:17:58 crc kubenswrapper[5001]: I0128 17:17:58.182415 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7fgxj_3cd579b1-57ae-4f44-85b5-53b6c746078b/kube-multus/1.log" Jan 28 17:17:58 crc kubenswrapper[5001]: I0128 17:17:58.184497 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovnkube-controller/3.log" Jan 28 17:17:58 crc kubenswrapper[5001]: I0128 17:17:58.227700 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerStarted","Data":"2784ac440cc205327f98403767113fab2703083c63ce4cbe2fd5e230fe576b6a"} Jan 28 17:17:58 crc kubenswrapper[5001]: I0128 17:17:58.228183 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:17:58 crc kubenswrapper[5001]: I0128 17:17:58.372160 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" podStartSLOduration=98.372138787 podStartE2EDuration="1m38.372138787s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:17:58.250497497 +0000 UTC m=+124.418285747" watchObservedRunningTime="2026-01-28 17:17:58.372138787 +0000 UTC m=+124.539927017" Jan 28 17:17:58 crc kubenswrapper[5001]: I0128 17:17:58.373193 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rnn76"] Jan 28 17:17:58 crc kubenswrapper[5001]: I0128 17:17:58.374193 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:17:58 crc kubenswrapper[5001]: E0128 17:17:58.374469 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:17:59 crc kubenswrapper[5001]: I0128 17:17:59.594275 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:17:59 crc kubenswrapper[5001]: I0128 17:17:59.594332 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:17:59 crc kubenswrapper[5001]: E0128 17:17:59.594523 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:17:59 crc kubenswrapper[5001]: I0128 17:17:59.594275 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:17:59 crc kubenswrapper[5001]: E0128 17:17:59.594650 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:17:59 crc kubenswrapper[5001]: E0128 17:17:59.594892 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:17:59 crc kubenswrapper[5001]: E0128 17:17:59.699201 5001 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 17:18:00 crc kubenswrapper[5001]: I0128 17:18:00.594067 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:18:00 crc kubenswrapper[5001]: E0128 17:18:00.594223 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:18:01 crc kubenswrapper[5001]: I0128 17:18:01.593458 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:18:01 crc kubenswrapper[5001]: I0128 17:18:01.593524 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:18:01 crc kubenswrapper[5001]: I0128 17:18:01.593572 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:18:01 crc kubenswrapper[5001]: E0128 17:18:01.593627 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:18:01 crc kubenswrapper[5001]: E0128 17:18:01.593716 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:18:01 crc kubenswrapper[5001]: E0128 17:18:01.593816 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:18:02 crc kubenswrapper[5001]: I0128 17:18:02.594102 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:18:02 crc kubenswrapper[5001]: E0128 17:18:02.594336 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:18:03 crc kubenswrapper[5001]: I0128 17:18:03.593227 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:18:03 crc kubenswrapper[5001]: I0128 17:18:03.593264 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:18:03 crc kubenswrapper[5001]: I0128 17:18:03.593346 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:18:03 crc kubenswrapper[5001]: E0128 17:18:03.593477 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:18:03 crc kubenswrapper[5001]: E0128 17:18:03.593555 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:18:03 crc kubenswrapper[5001]: E0128 17:18:03.593684 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:18:04 crc kubenswrapper[5001]: I0128 17:18:04.593136 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:18:04 crc kubenswrapper[5001]: E0128 17:18:04.594318 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:18:04 crc kubenswrapper[5001]: E0128 17:18:04.699725 5001 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 17:18:05 crc kubenswrapper[5001]: I0128 17:18:05.593864 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:18:05 crc kubenswrapper[5001]: E0128 17:18:05.594233 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:18:05 crc kubenswrapper[5001]: I0128 17:18:05.594009 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:18:05 crc kubenswrapper[5001]: E0128 17:18:05.594299 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:18:05 crc kubenswrapper[5001]: I0128 17:18:05.593881 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:18:05 crc kubenswrapper[5001]: E0128 17:18:05.594346 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:18:06 crc kubenswrapper[5001]: I0128 17:18:06.594101 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:18:06 crc kubenswrapper[5001]: E0128 17:18:06.594245 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:18:07 crc kubenswrapper[5001]: I0128 17:18:07.593303 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:18:07 crc kubenswrapper[5001]: I0128 17:18:07.593331 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:18:07 crc kubenswrapper[5001]: I0128 17:18:07.593334 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:18:07 crc kubenswrapper[5001]: E0128 17:18:07.593434 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:18:07 crc kubenswrapper[5001]: E0128 17:18:07.593553 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:18:07 crc kubenswrapper[5001]: E0128 17:18:07.593644 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:18:08 crc kubenswrapper[5001]: I0128 17:18:08.593582 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:18:08 crc kubenswrapper[5001]: E0128 17:18:08.593774 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:18:09 crc kubenswrapper[5001]: I0128 17:18:09.593863 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:18:09 crc kubenswrapper[5001]: I0128 17:18:09.593935 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:18:09 crc kubenswrapper[5001]: E0128 17:18:09.594038 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:18:09 crc kubenswrapper[5001]: I0128 17:18:09.593942 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:18:09 crc kubenswrapper[5001]: E0128 17:18:09.594111 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:18:09 crc kubenswrapper[5001]: E0128 17:18:09.594186 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:18:09 crc kubenswrapper[5001]: E0128 17:18:09.700944 5001 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 17:18:10 crc kubenswrapper[5001]: I0128 17:18:10.594135 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:18:10 crc kubenswrapper[5001]: I0128 17:18:10.594357 5001 scope.go:117] "RemoveContainer" containerID="b0c8ab8cf8afc73c6271962be74de68bfe5f1afb1a4d1725c0733393372a9fa7" Jan 28 17:18:10 crc kubenswrapper[5001]: E0128 17:18:10.594381 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:18:11 crc kubenswrapper[5001]: I0128 17:18:11.268431 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7fgxj_3cd579b1-57ae-4f44-85b5-53b6c746078b/kube-multus/1.log" Jan 28 17:18:11 crc kubenswrapper[5001]: I0128 17:18:11.268494 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7fgxj" event={"ID":"3cd579b1-57ae-4f44-85b5-53b6c746078b","Type":"ContainerStarted","Data":"ce3a9bb5672656f9b7c84139662947a20d0c4248d56a0a1cc3fa2790eed2cabf"} Jan 28 17:18:11 crc kubenswrapper[5001]: I0128 17:18:11.593159 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:18:11 crc kubenswrapper[5001]: I0128 17:18:11.593267 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:18:11 crc kubenswrapper[5001]: I0128 17:18:11.593286 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:18:11 crc kubenswrapper[5001]: E0128 17:18:11.593461 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:18:11 crc kubenswrapper[5001]: E0128 17:18:11.593602 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:18:11 crc kubenswrapper[5001]: E0128 17:18:11.593676 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:18:12 crc kubenswrapper[5001]: I0128 17:18:12.593566 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:18:12 crc kubenswrapper[5001]: E0128 17:18:12.593710 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:18:13 crc kubenswrapper[5001]: I0128 17:18:13.594052 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:18:13 crc kubenswrapper[5001]: I0128 17:18:13.594102 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:18:13 crc kubenswrapper[5001]: E0128 17:18:13.594183 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 17:18:13 crc kubenswrapper[5001]: I0128 17:18:13.594052 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:18:13 crc kubenswrapper[5001]: E0128 17:18:13.594542 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 17:18:13 crc kubenswrapper[5001]: E0128 17:18:13.594605 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 17:18:14 crc kubenswrapper[5001]: I0128 17:18:14.593256 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:18:14 crc kubenswrapper[5001]: E0128 17:18:14.594421 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rnn76" podUID="2b5caa8d-b144-45a6-b334-e9e77c13064d" Jan 28 17:18:15 crc kubenswrapper[5001]: I0128 17:18:15.593840 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:18:15 crc kubenswrapper[5001]: I0128 17:18:15.593861 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:18:15 crc kubenswrapper[5001]: I0128 17:18:15.593916 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:18:15 crc kubenswrapper[5001]: I0128 17:18:15.595939 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 28 17:18:15 crc kubenswrapper[5001]: I0128 17:18:15.596389 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 17:18:15 crc kubenswrapper[5001]: I0128 17:18:15.597587 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 17:18:15 crc kubenswrapper[5001]: I0128 17:18:15.597704 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.267196 5001 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.305245 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4jqg"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.305692 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fhtpl"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.305967 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.306068 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4jqg" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.308770 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mc9xw"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.309106 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r5w7n"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.309300 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jk8v9"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.309620 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jk8v9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.309627 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mc9xw" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.309883 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r5w7n" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.310285 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.310715 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.310880 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.311022 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.311259 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.312281 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.312883 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.312931 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-9zxnt"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.313278 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-9zxnt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.314360 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dkz7l"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.317213 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-dkz7l" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.317933 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.318261 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.318746 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.318871 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.319202 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.319382 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.319679 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.319708 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.319832 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.319879 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.319906 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.319995 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.320167 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.320314 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.320419 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.320625 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.323260 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.324088 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 
17:18:16.324448 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.325019 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-kfhth"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.338047 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.338083 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.338211 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.338348 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.338385 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.338477 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.338488 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.338550 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.339063 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.339539 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-svm8v"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.339954 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-svm8v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.340333 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.341453 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.342469 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.342639 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.342923 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.343156 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.343324 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.343422 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.343586 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.343673 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.343753 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.344146 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-4xbj9"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.344606 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.345005 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-76gt6"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.345617 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.346671 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tv6fl"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.347074 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.348869 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.352155 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.365798 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.388003 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.388221 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.388340 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.392676 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mc9xw"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.411622 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r5w7n"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.427640 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.437167 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.438499 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.438543 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-audit-policies\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.438658 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c14628eb-e612-43d2-b299-88ebb92f22a0-serving-cert\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.438703 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/95832a2d-7e40-4e03-a731-2c8ed45384b4-etcd-serving-ca\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.438781 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95832a2d-7e40-4e03-a731-2c8ed45384b4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.438839 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44376569-5c2b-4bb3-9153-aa4c088e7b0c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-t4nnb\" (UID: \"44376569-5c2b-4bb3-9153-aa4c088e7b0c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.438859 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c14628eb-e612-43d2-b299-88ebb92f22a0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439145 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca322b78-934b-4119-a0f6-8037e473a1f9-config\") pod \"machine-api-operator-5694c8668f-jk8v9\" (UID: \"ca322b78-934b-4119-a0f6-8037e473a1f9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jk8v9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439175 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-service-ca\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439192 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/688d529a-47a0-40ba-86db-6ae47a10f578-config\") pod \"console-operator-58897d9998-dkz7l\" (UID: \"688d529a-47a0-40ba-86db-6ae47a10f578\") " pod="openshift-console-operator/console-operator-58897d9998-dkz7l" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439210 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/688d529a-47a0-40ba-86db-6ae47a10f578-trusted-ca\") pod \"console-operator-58897d9998-dkz7l\" (UID: \"688d529a-47a0-40ba-86db-6ae47a10f578\") " pod="openshift-console-operator/console-operator-58897d9998-dkz7l" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439234 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c14628eb-e612-43d2-b299-88ebb92f22a0-encryption-config\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: 
\"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439312 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439387 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwwkq\" (UniqueName: \"kubernetes.io/projected/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-kube-api-access-nwwkq\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439409 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpc65\" (UniqueName: \"kubernetes.io/projected/b65a21ed-0e4a-4c38-8603-3de6d2ae26c4-kube-api-access-qpc65\") pod \"machine-approver-56656f9798-svm8v\" (UID: \"b65a21ed-0e4a-4c38-8603-3de6d2ae26c4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-svm8v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439429 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07db7890-6e55-4dd2-988b-084d8e060c7b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-r5w7n\" (UID: \"07db7890-6e55-4dd2-988b-084d8e060c7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r5w7n" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439450 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/871aed01-fb32-4ff6-ab22-a59051b53d69-available-featuregates\") pod \"openshift-config-operator-7777fb866f-d8rr7\" (UID: \"871aed01-fb32-4ff6-ab22-a59051b53d69\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439506 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prbs5\" (UniqueName: \"kubernetes.io/projected/344f2318-5424-4d56-9979-747c89ff11ad-kube-api-access-prbs5\") pod \"authentication-operator-69f744f599-76gt6\" (UID: \"344f2318-5424-4d56-9979-747c89ff11ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439534 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439559 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d0b3d8c9-d6a5-42b0-8620-157672e4090f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mc9xw\" (UID: \"d0b3d8c9-d6a5-42b0-8620-157672e4090f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mc9xw" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439583 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q89zl\" (UniqueName: \"kubernetes.io/projected/ac2c17b5-e7c6-4163-9f4c-ac7a989bf9fe-kube-api-access-q89zl\") pod \"cluster-samples-operator-665b6dd947-l4jqg\" (UID: \"ac2c17b5-e7c6-4163-9f4c-ac7a989bf9fe\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4jqg" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439618 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b65a21ed-0e4a-4c38-8603-3de6d2ae26c4-machine-approver-tls\") pod \"machine-approver-56656f9798-svm8v\" (UID: \"b65a21ed-0e4a-4c38-8603-3de6d2ae26c4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-svm8v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439657 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zwg6\" (UniqueName: \"kubernetes.io/projected/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-kube-api-access-6zwg6\") pod \"route-controller-manager-6576b87f9c-8kk4v\" (UID: \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439680 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clnws\" (UniqueName: \"kubernetes.io/projected/07db7890-6e55-4dd2-988b-084d8e060c7b-kube-api-access-clnws\") pod \"openshift-apiserver-operator-796bbdcf4f-r5w7n\" (UID: \"07db7890-6e55-4dd2-988b-084d8e060c7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r5w7n" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439713 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grcbz\" (UniqueName: \"kubernetes.io/projected/c14628eb-e612-43d2-b299-88ebb92f22a0-kube-api-access-grcbz\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439782 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-serving-cert\") pod \"route-controller-manager-6576b87f9c-8kk4v\" (UID: \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439824 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-audit-dir\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439845 5001 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439865 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-client-ca\") pod \"route-controller-manager-6576b87f9c-8kk4v\" (UID: \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439886 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ca322b78-934b-4119-a0f6-8037e473a1f9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jk8v9\" (UID: \"ca322b78-934b-4119-a0f6-8037e473a1f9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jk8v9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439926 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bflrm\" (UniqueName: \"kubernetes.io/projected/871aed01-fb32-4ff6-ab22-a59051b53d69-kube-api-access-bflrm\") pod \"openshift-config-operator-7777fb866f-d8rr7\" (UID: \"871aed01-fb32-4ff6-ab22-a59051b53d69\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439953 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c14628eb-e612-43d2-b299-88ebb92f22a0-etcd-client\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.439994 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440035 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440060 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440149 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/344f2318-5424-4d56-9979-747c89ff11ad-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-76gt6\" (UID: \"344f2318-5424-4d56-9979-747c89ff11ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440196 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q92d5\" (UniqueName: \"kubernetes.io/projected/d0b3d8c9-d6a5-42b0-8620-157672e4090f-kube-api-access-q92d5\") pod \"openshift-controller-manager-operator-756b6f6bc6-mc9xw\" (UID: \"d0b3d8c9-d6a5-42b0-8620-157672e4090f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mc9xw" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440229 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/95832a2d-7e40-4e03-a731-2c8ed45384b4-image-import-ca\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440254 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac2c17b5-e7c6-4163-9f4c-ac7a989bf9fe-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-l4jqg\" (UID: \"ac2c17b5-e7c6-4163-9f4c-ac7a989bf9fe\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4jqg" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440282 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b3d8c9-d6a5-42b0-8620-157672e4090f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mc9xw\" (UID: \"d0b3d8c9-d6a5-42b0-8620-157672e4090f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mc9xw" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440310 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44f484a0-a976-4e56-82d9-84d8953664db-client-ca\") pod \"controller-manager-879f6c89f-tv6fl\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440338 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ca322b78-934b-4119-a0f6-8037e473a1f9-images\") pod \"machine-api-operator-5694c8668f-jk8v9\" (UID: \"ca322b78-934b-4119-a0f6-8037e473a1f9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jk8v9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440389 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/344f2318-5424-4d56-9979-747c89ff11ad-serving-cert\") pod 
\"authentication-operator-69f744f599-76gt6\" (UID: \"344f2318-5424-4d56-9979-747c89ff11ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440412 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440434 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44f484a0-a976-4e56-82d9-84d8953664db-config\") pod \"controller-manager-879f6c89f-tv6fl\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440458 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c14628eb-e612-43d2-b299-88ebb92f22a0-audit-policies\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440478 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c14628eb-e612-43d2-b299-88ebb92f22a0-audit-dir\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440535 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440575 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/95832a2d-7e40-4e03-a731-2c8ed45384b4-audit-dir\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440601 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44376569-5c2b-4bb3-9153-aa4c088e7b0c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-t4nnb\" (UID: \"44376569-5c2b-4bb3-9153-aa4c088e7b0c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440650 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/95832a2d-7e40-4e03-a731-2c8ed45384b4-etcd-client\") pod \"apiserver-76f77b778f-kfhth\" (UID: 
\"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440669 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07db7890-6e55-4dd2-988b-084d8e060c7b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-r5w7n\" (UID: \"07db7890-6e55-4dd2-988b-084d8e060c7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r5w7n" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440689 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjh6j\" (UniqueName: \"kubernetes.io/projected/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-kube-api-access-gjh6j\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440716 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/344f2318-5424-4d56-9979-747c89ff11ad-config\") pod \"authentication-operator-69f744f599-76gt6\" (UID: \"344f2318-5424-4d56-9979-747c89ff11ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440739 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b65a21ed-0e4a-4c38-8603-3de6d2ae26c4-config\") pod \"machine-approver-56656f9798-svm8v\" (UID: \"b65a21ed-0e4a-4c38-8603-3de6d2ae26c4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-svm8v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440776 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/871aed01-fb32-4ff6-ab22-a59051b53d69-serving-cert\") pod \"openshift-config-operator-7777fb866f-d8rr7\" (UID: \"871aed01-fb32-4ff6-ab22-a59051b53d69\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440800 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440825 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/95832a2d-7e40-4e03-a731-2c8ed45384b4-audit\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440845 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/95832a2d-7e40-4e03-a731-2c8ed45384b4-encryption-config\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " 
pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440882 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/344f2318-5424-4d56-9979-747c89ff11ad-service-ca-bundle\") pod \"authentication-operator-69f744f599-76gt6\" (UID: \"344f2318-5424-4d56-9979-747c89ff11ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440904 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.440958 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44f484a0-a976-4e56-82d9-84d8953664db-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tv6fl\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441007 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr2dk\" (UniqueName: \"kubernetes.io/projected/881bc101-23c7-42c2-b4b9-b9983d9d4b1c-kube-api-access-cr2dk\") pod \"downloads-7954f5f757-9zxnt\" (UID: \"881bc101-23c7-42c2-b4b9-b9983d9d4b1c\") " pod="openshift-console/downloads-7954f5f757-9zxnt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441033 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95832a2d-7e40-4e03-a731-2c8ed45384b4-serving-cert\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441062 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b65a21ed-0e4a-4c38-8603-3de6d2ae26c4-auth-proxy-config\") pod \"machine-approver-56656f9798-svm8v\" (UID: \"b65a21ed-0e4a-4c38-8603-3de6d2ae26c4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-svm8v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441080 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-trusted-ca-bundle\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441099 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg5v5\" (UniqueName: \"kubernetes.io/projected/44f484a0-a976-4e56-82d9-84d8953664db-kube-api-access-xg5v5\") pod \"controller-manager-879f6c89f-tv6fl\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441117 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95832a2d-7e40-4e03-a731-2c8ed45384b4-config\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441138 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-oauth-serving-cert\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441157 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-console-config\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441176 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kkwt\" (UniqueName: \"kubernetes.io/projected/95832a2d-7e40-4e03-a731-2c8ed45384b4-kube-api-access-7kkwt\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441194 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzzpm\" (UniqueName: \"kubernetes.io/projected/44376569-5c2b-4bb3-9153-aa4c088e7b0c-kube-api-access-rzzpm\") pod \"cluster-image-registry-operator-dc59b4c8b-t4nnb\" (UID: \"44376569-5c2b-4bb3-9153-aa4c088e7b0c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441216 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-config\") pod \"route-controller-manager-6576b87f9c-8kk4v\" (UID: \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441249 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c14628eb-e612-43d2-b299-88ebb92f22a0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441268 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-console-oauth-config\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441285 5001 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/95832a2d-7e40-4e03-a731-2c8ed45384b4-node-pullsecrets\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441350 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44376569-5c2b-4bb3-9153-aa4c088e7b0c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-t4nnb\" (UID: \"44376569-5c2b-4bb3-9153-aa4c088e7b0c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441417 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvmwt\" (UniqueName: \"kubernetes.io/projected/ca322b78-934b-4119-a0f6-8037e473a1f9-kube-api-access-rvmwt\") pod \"machine-api-operator-5694c8668f-jk8v9\" (UID: \"ca322b78-934b-4119-a0f6-8037e473a1f9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jk8v9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441445 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/688d529a-47a0-40ba-86db-6ae47a10f578-serving-cert\") pod \"console-operator-58897d9998-dkz7l\" (UID: \"688d529a-47a0-40ba-86db-6ae47a10f578\") " pod="openshift-console-operator/console-operator-58897d9998-dkz7l" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441554 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-console-serving-cert\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441579 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44f484a0-a976-4e56-82d9-84d8953664db-serving-cert\") pod \"controller-manager-879f6c89f-tv6fl\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.441692 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c5v6\" (UniqueName: \"kubernetes.io/projected/688d529a-47a0-40ba-86db-6ae47a10f578-kube-api-access-7c5v6\") pod \"console-operator-58897d9998-dkz7l\" (UID: \"688d529a-47a0-40ba-86db-6ae47a10f578\") " pod="openshift-console-operator/console-operator-58897d9998-dkz7l" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.447340 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.448069 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.448223 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 
17:18:16.448234 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.448351 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.448068 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.448544 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.448842 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.449047 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.449159 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.449257 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.449281 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.449472 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.449522 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.449904 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.450549 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.450802 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.451082 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4jqg"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.451256 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.451464 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.451665 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.453075 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.453586 5001 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.453639 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.453750 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.453819 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.453875 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.454013 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.454103 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.454302 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.454388 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.454559 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.456159 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.456431 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.456540 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.458554 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.459078 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.459496 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.459845 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.461228 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.461436 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.461538 5001 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.461565 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.461615 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.461698 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.461778 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.461906 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.462022 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.461296 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fhtpl"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.462170 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.462035 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.462304 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.482347 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.484453 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9zxnt"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.508419 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.529170 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.530842 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4vmwl"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.531382 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zf27q"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.531609 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.532031 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.532586 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4vmwl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.532788 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zf27q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.535827 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-x5hd4"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.536324 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.541033 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.541165 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.541129 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542117 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr2dk\" (UniqueName: \"kubernetes.io/projected/881bc101-23c7-42c2-b4b9-b9983d9d4b1c-kube-api-access-cr2dk\") pod \"downloads-7954f5f757-9zxnt\" (UID: \"881bc101-23c7-42c2-b4b9-b9983d9d4b1c\") " pod="openshift-console/downloads-7954f5f757-9zxnt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542144 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b65a21ed-0e4a-4c38-8603-3de6d2ae26c4-auth-proxy-config\") pod \"machine-approver-56656f9798-svm8v\" (UID: \"b65a21ed-0e4a-4c38-8603-3de6d2ae26c4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-svm8v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542162 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-trusted-ca-bundle\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542178 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg5v5\" (UniqueName: \"kubernetes.io/projected/44f484a0-a976-4e56-82d9-84d8953664db-kube-api-access-xg5v5\") pod \"controller-manager-879f6c89f-tv6fl\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542196 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95832a2d-7e40-4e03-a731-2c8ed45384b4-config\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " 
pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542211 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95832a2d-7e40-4e03-a731-2c8ed45384b4-serving-cert\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542226 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-oauth-serving-cert\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542241 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-console-config\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542255 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kkwt\" (UniqueName: \"kubernetes.io/projected/95832a2d-7e40-4e03-a731-2c8ed45384b4-kube-api-access-7kkwt\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542270 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzzpm\" (UniqueName: \"kubernetes.io/projected/44376569-5c2b-4bb3-9153-aa4c088e7b0c-kube-api-access-rzzpm\") pod \"cluster-image-registry-operator-dc59b4c8b-t4nnb\" (UID: \"44376569-5c2b-4bb3-9153-aa4c088e7b0c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542294 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c14628eb-e612-43d2-b299-88ebb92f22a0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542310 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-console-oauth-config\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542324 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-config\") pod \"route-controller-manager-6576b87f9c-8kk4v\" (UID: \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542339 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/44376569-5c2b-4bb3-9153-aa4c088e7b0c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-t4nnb\" (UID: \"44376569-5c2b-4bb3-9153-aa4c088e7b0c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542352 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/95832a2d-7e40-4e03-a731-2c8ed45384b4-node-pullsecrets\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542367 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvmwt\" (UniqueName: \"kubernetes.io/projected/ca322b78-934b-4119-a0f6-8037e473a1f9-kube-api-access-rvmwt\") pod \"machine-api-operator-5694c8668f-jk8v9\" (UID: \"ca322b78-934b-4119-a0f6-8037e473a1f9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jk8v9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542388 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-console-serving-cert\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542403 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44f484a0-a976-4e56-82d9-84d8953664db-serving-cert\") pod \"controller-manager-879f6c89f-tv6fl\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542419 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/688d529a-47a0-40ba-86db-6ae47a10f578-serving-cert\") pod \"console-operator-58897d9998-dkz7l\" (UID: \"688d529a-47a0-40ba-86db-6ae47a10f578\") " pod="openshift-console-operator/console-operator-58897d9998-dkz7l" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542437 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c5v6\" (UniqueName: \"kubernetes.io/projected/688d529a-47a0-40ba-86db-6ae47a10f578-kube-api-access-7c5v6\") pod \"console-operator-58897d9998-dkz7l\" (UID: \"688d529a-47a0-40ba-86db-6ae47a10f578\") " pod="openshift-console-operator/console-operator-58897d9998-dkz7l" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542452 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542469 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c14628eb-e612-43d2-b299-88ebb92f22a0-serving-cert\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542484 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-audit-policies\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542500 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95832a2d-7e40-4e03-a731-2c8ed45384b4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542515 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44376569-5c2b-4bb3-9153-aa4c088e7b0c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-t4nnb\" (UID: \"44376569-5c2b-4bb3-9153-aa4c088e7b0c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542532 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/95832a2d-7e40-4e03-a731-2c8ed45384b4-etcd-serving-ca\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542547 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c14628eb-e612-43d2-b299-88ebb92f22a0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542563 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca322b78-934b-4119-a0f6-8037e473a1f9-config\") pod \"machine-api-operator-5694c8668f-jk8v9\" (UID: \"ca322b78-934b-4119-a0f6-8037e473a1f9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jk8v9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542579 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-service-ca\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542593 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/688d529a-47a0-40ba-86db-6ae47a10f578-config\") pod \"console-operator-58897d9998-dkz7l\" (UID: \"688d529a-47a0-40ba-86db-6ae47a10f578\") " pod="openshift-console-operator/console-operator-58897d9998-dkz7l" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542607 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/688d529a-47a0-40ba-86db-6ae47a10f578-trusted-ca\") pod \"console-operator-58897d9998-dkz7l\" (UID: \"688d529a-47a0-40ba-86db-6ae47a10f578\") " pod="openshift-console-operator/console-operator-58897d9998-dkz7l" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542627 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c14628eb-e612-43d2-b299-88ebb92f22a0-encryption-config\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542648 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542670 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwwkq\" (UniqueName: \"kubernetes.io/projected/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-kube-api-access-nwwkq\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542704 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpc65\" (UniqueName: \"kubernetes.io/projected/b65a21ed-0e4a-4c38-8603-3de6d2ae26c4-kube-api-access-qpc65\") pod \"machine-approver-56656f9798-svm8v\" (UID: \"b65a21ed-0e4a-4c38-8603-3de6d2ae26c4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-svm8v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542725 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07db7890-6e55-4dd2-988b-084d8e060c7b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-r5w7n\" (UID: \"07db7890-6e55-4dd2-988b-084d8e060c7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r5w7n" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542744 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prbs5\" (UniqueName: \"kubernetes.io/projected/344f2318-5424-4d56-9979-747c89ff11ad-kube-api-access-prbs5\") pod \"authentication-operator-69f744f599-76gt6\" (UID: \"344f2318-5424-4d56-9979-747c89ff11ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542759 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542775 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/871aed01-fb32-4ff6-ab22-a59051b53d69-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-d8rr7\" (UID: \"871aed01-fb32-4ff6-ab22-a59051b53d69\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542790 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b3d8c9-d6a5-42b0-8620-157672e4090f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mc9xw\" (UID: \"d0b3d8c9-d6a5-42b0-8620-157672e4090f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mc9xw" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542804 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q89zl\" (UniqueName: \"kubernetes.io/projected/ac2c17b5-e7c6-4163-9f4c-ac7a989bf9fe-kube-api-access-q89zl\") pod \"cluster-samples-operator-665b6dd947-l4jqg\" (UID: \"ac2c17b5-e7c6-4163-9f4c-ac7a989bf9fe\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4jqg" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542819 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b65a21ed-0e4a-4c38-8603-3de6d2ae26c4-machine-approver-tls\") pod \"machine-approver-56656f9798-svm8v\" (UID: \"b65a21ed-0e4a-4c38-8603-3de6d2ae26c4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-svm8v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542833 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zwg6\" (UniqueName: \"kubernetes.io/projected/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-kube-api-access-6zwg6\") pod \"route-controller-manager-6576b87f9c-8kk4v\" (UID: \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542848 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clnws\" (UniqueName: \"kubernetes.io/projected/07db7890-6e55-4dd2-988b-084d8e060c7b-kube-api-access-clnws\") pod \"openshift-apiserver-operator-796bbdcf4f-r5w7n\" (UID: \"07db7890-6e55-4dd2-988b-084d8e060c7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r5w7n" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542873 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grcbz\" (UniqueName: \"kubernetes.io/projected/c14628eb-e612-43d2-b299-88ebb92f22a0-kube-api-access-grcbz\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542933 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-audit-dir\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542962 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fhtpl\" 
(UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.542991 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-client-ca\") pod \"route-controller-manager-6576b87f9c-8kk4v\" (UID: \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543005 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-serving-cert\") pod \"route-controller-manager-6576b87f9c-8kk4v\" (UID: \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543021 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ca322b78-934b-4119-a0f6-8037e473a1f9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jk8v9\" (UID: \"ca322b78-934b-4119-a0f6-8037e473a1f9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jk8v9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543040 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bflrm\" (UniqueName: \"kubernetes.io/projected/871aed01-fb32-4ff6-ab22-a59051b53d69-kube-api-access-bflrm\") pod \"openshift-config-operator-7777fb866f-d8rr7\" (UID: \"871aed01-fb32-4ff6-ab22-a59051b53d69\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543054 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c14628eb-e612-43d2-b299-88ebb92f22a0-etcd-client\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543069 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543086 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543104 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543121 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/344f2318-5424-4d56-9979-747c89ff11ad-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-76gt6\" (UID: \"344f2318-5424-4d56-9979-747c89ff11ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543140 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q92d5\" (UniqueName: \"kubernetes.io/projected/d0b3d8c9-d6a5-42b0-8620-157672e4090f-kube-api-access-q92d5\") pod \"openshift-controller-manager-operator-756b6f6bc6-mc9xw\" (UID: \"d0b3d8c9-d6a5-42b0-8620-157672e4090f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mc9xw" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543146 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c14628eb-e612-43d2-b299-88ebb92f22a0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543161 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/95832a2d-7e40-4e03-a731-2c8ed45384b4-image-import-ca\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543208 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b3d8c9-d6a5-42b0-8620-157672e4090f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mc9xw\" (UID: \"d0b3d8c9-d6a5-42b0-8620-157672e4090f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mc9xw" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543229 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44f484a0-a976-4e56-82d9-84d8953664db-client-ca\") pod \"controller-manager-879f6c89f-tv6fl\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543250 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac2c17b5-e7c6-4163-9f4c-ac7a989bf9fe-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-l4jqg\" (UID: \"ac2c17b5-e7c6-4163-9f4c-ac7a989bf9fe\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4jqg" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543267 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ca322b78-934b-4119-a0f6-8037e473a1f9-images\") pod \"machine-api-operator-5694c8668f-jk8v9\" (UID: \"ca322b78-934b-4119-a0f6-8037e473a1f9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jk8v9" Jan 28 17:18:16 crc kubenswrapper[5001]: 
I0128 17:18:16.543286 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/344f2318-5424-4d56-9979-747c89ff11ad-serving-cert\") pod \"authentication-operator-69f744f599-76gt6\" (UID: \"344f2318-5424-4d56-9979-747c89ff11ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543302 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543319 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44f484a0-a976-4e56-82d9-84d8953664db-config\") pod \"controller-manager-879f6c89f-tv6fl\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543338 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c14628eb-e612-43d2-b299-88ebb92f22a0-audit-policies\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543356 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c14628eb-e612-43d2-b299-88ebb92f22a0-audit-dir\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543374 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543393 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/95832a2d-7e40-4e03-a731-2c8ed45384b4-audit-dir\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543408 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44376569-5c2b-4bb3-9153-aa4c088e7b0c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-t4nnb\" (UID: \"44376569-5c2b-4bb3-9153-aa4c088e7b0c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543426 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/95832a2d-7e40-4e03-a731-2c8ed45384b4-etcd-client\") pod 
\"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543441 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07db7890-6e55-4dd2-988b-084d8e060c7b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-r5w7n\" (UID: \"07db7890-6e55-4dd2-988b-084d8e060c7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r5w7n" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543462 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/344f2318-5424-4d56-9979-747c89ff11ad-config\") pod \"authentication-operator-69f744f599-76gt6\" (UID: \"344f2318-5424-4d56-9979-747c89ff11ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543483 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjh6j\" (UniqueName: \"kubernetes.io/projected/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-kube-api-access-gjh6j\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543499 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b65a21ed-0e4a-4c38-8603-3de6d2ae26c4-config\") pod \"machine-approver-56656f9798-svm8v\" (UID: \"b65a21ed-0e4a-4c38-8603-3de6d2ae26c4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-svm8v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543520 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/871aed01-fb32-4ff6-ab22-a59051b53d69-serving-cert\") pod \"openshift-config-operator-7777fb866f-d8rr7\" (UID: \"871aed01-fb32-4ff6-ab22-a59051b53d69\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543535 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543551 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/95832a2d-7e40-4e03-a731-2c8ed45384b4-audit\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543568 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/95832a2d-7e40-4e03-a731-2c8ed45384b4-encryption-config\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543587 5001 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/344f2318-5424-4d56-9979-747c89ff11ad-service-ca-bundle\") pod \"authentication-operator-69f744f599-76gt6\" (UID: \"344f2318-5424-4d56-9979-747c89ff11ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543651 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543684 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-trusted-ca-bundle\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543679 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44f484a0-a976-4e56-82d9-84d8953664db-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tv6fl\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.543932 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/95832a2d-7e40-4e03-a731-2c8ed45384b4-image-import-ca\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.544292 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95832a2d-7e40-4e03-a731-2c8ed45384b4-config\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.544338 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/95832a2d-7e40-4e03-a731-2c8ed45384b4-audit-dir\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.545017 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b65a21ed-0e4a-4c38-8603-3de6d2ae26c4-auth-proxy-config\") pod \"machine-approver-56656f9798-svm8v\" (UID: \"b65a21ed-0e4a-4c38-8603-3de6d2ae26c4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-svm8v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.545092 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-audit-dir\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 
28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.545483 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5ns7t"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.545860 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg7p5"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.546126 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wx758"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.546232 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07db7890-6e55-4dd2-988b-084d8e060c7b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-r5w7n\" (UID: \"07db7890-6e55-4dd2-988b-084d8e060c7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r5w7n" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.546490 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.546508 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9sq5q"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.546667 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.546871 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jzg7q"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.547119 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/344f2318-5424-4d56-9979-747c89ff11ad-config\") pod \"authentication-operator-69f744f599-76gt6\" (UID: \"344f2318-5424-4d56-9979-747c89ff11ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.547609 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-x5hd4" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.547823 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.547915 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b65a21ed-0e4a-4c38-8603-3de6d2ae26c4-config\") pod \"machine-approver-56656f9798-svm8v\" (UID: \"b65a21ed-0e4a-4c38-8603-3de6d2ae26c4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-svm8v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.547988 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg7p5" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.548144 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wx758" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.548282 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9sq5q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.548428 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jzg7q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.549525 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/95832a2d-7e40-4e03-a731-2c8ed45384b4-audit\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.550674 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-console-oauth-config\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.551038 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/95832a2d-7e40-4e03-a731-2c8ed45384b4-etcd-client\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.551339 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c14628eb-e612-43d2-b299-88ebb92f22a0-etcd-client\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.551442 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.551547 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8ntx7"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.551596 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07db7890-6e55-4dd2-988b-084d8e060c7b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-r5w7n\" (UID: \"07db7890-6e55-4dd2-988b-084d8e060c7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r5w7n" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.551606 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-serving-cert\") pod \"route-controller-manager-6576b87f9c-8kk4v\" (UID: \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.551600 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/95832a2d-7e40-4e03-a731-2c8ed45384b4-node-pullsecrets\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.551850 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.551939 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/95832a2d-7e40-4e03-a731-2c8ed45384b4-encryption-config\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.552429 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/344f2318-5424-4d56-9979-747c89ff11ad-service-ca-bundle\") pod \"authentication-operator-69f744f599-76gt6\" (UID: \"344f2318-5424-4d56-9979-747c89ff11ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.552483 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-client-ca\") pod \"route-controller-manager-6576b87f9c-8kk4v\" (UID: \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.553032 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95832a2d-7e40-4e03-a731-2c8ed45384b4-serving-cert\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.553218 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.553243 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2zkdv"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.553550 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.553822 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ntx7" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.553865 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-oauth-serving-cert\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.554449 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ca322b78-934b-4119-a0f6-8037e473a1f9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jk8v9\" (UID: \"ca322b78-934b-4119-a0f6-8037e473a1f9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jk8v9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.554509 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-console-config\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.554590 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.554624 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.555469 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44f484a0-a976-4e56-82d9-84d8953664db-client-ca\") pod \"controller-manager-879f6c89f-tv6fl\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.555742 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.556415 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44f484a0-a976-4e56-82d9-84d8953664db-config\") pod \"controller-manager-879f6c89f-tv6fl\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.556508 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-tv6fl"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.556562 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.556839 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c14628eb-e612-43d2-b299-88ebb92f22a0-audit-policies\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.556882 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c14628eb-e612-43d2-b299-88ebb92f22a0-audit-dir\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.557012 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b65a21ed-0e4a-4c38-8603-3de6d2ae26c4-machine-approver-tls\") pod \"machine-approver-56656f9798-svm8v\" (UID: \"b65a21ed-0e4a-4c38-8603-3de6d2ae26c4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-svm8v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.557118 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.557275 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/871aed01-fb32-4ff6-ab22-a59051b53d69-available-featuregates\") pod \"openshift-config-operator-7777fb866f-d8rr7\" (UID: \"871aed01-fb32-4ff6-ab22-a59051b53d69\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.557554 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.557795 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b3d8c9-d6a5-42b0-8620-157672e4090f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mc9xw\" (UID: \"d0b3d8c9-d6a5-42b0-8620-157672e4090f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mc9xw" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.557824 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.558087 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ca322b78-934b-4119-a0f6-8037e473a1f9-images\") pod \"machine-api-operator-5694c8668f-jk8v9\" (UID: \"ca322b78-934b-4119-a0f6-8037e473a1f9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jk8v9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.558088 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44f484a0-a976-4e56-82d9-84d8953664db-serving-cert\") pod \"controller-manager-879f6c89f-tv6fl\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.558313 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b3d8c9-d6a5-42b0-8620-157672e4090f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mc9xw\" (UID: \"d0b3d8c9-d6a5-42b0-8620-157672e4090f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mc9xw" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.558839 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.559340 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-service-ca\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.559706 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.560384 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-console-serving-cert\") pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.560421 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c14628eb-e612-43d2-b299-88ebb92f22a0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.560544 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.560927 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/95832a2d-7e40-4e03-a731-2c8ed45384b4-etcd-serving-ca\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.561064 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/688d529a-47a0-40ba-86db-6ae47a10f578-config\") pod \"console-operator-58897d9998-dkz7l\" (UID: \"688d529a-47a0-40ba-86db-6ae47a10f578\") " pod="openshift-console-operator/console-operator-58897d9998-dkz7l" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.564679 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.567206 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-config\") pod \"route-controller-manager-6576b87f9c-8kk4v\" (UID: \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.574605 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.577093 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.577341 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.579551 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.582893 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.582898 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.583248 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.584074 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdnlg"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.584523 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.586195 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdnlg" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.587233 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/688d529a-47a0-40ba-86db-6ae47a10f578-serving-cert\") pod \"console-operator-58897d9998-dkz7l\" (UID: \"688d529a-47a0-40ba-86db-6ae47a10f578\") " pod="openshift-console-operator/console-operator-58897d9998-dkz7l" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.587283 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.587525 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-audit-policies\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.587540 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c14628eb-e612-43d2-b299-88ebb92f22a0-encryption-config\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.589477 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/688d529a-47a0-40ba-86db-6ae47a10f578-trusted-ca\") pod \"console-operator-58897d9998-dkz7l\" (UID: \"688d529a-47a0-40ba-86db-6ae47a10f578\") " pod="openshift-console-operator/console-operator-58897d9998-dkz7l" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.589564 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2h5l2"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.589781 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c14628eb-e612-43d2-b299-88ebb92f22a0-serving-cert\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.590576 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95832a2d-7e40-4e03-a731-2c8ed45384b4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.590875 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.591111 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.591345 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.591412 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/871aed01-fb32-4ff6-ab22-a59051b53d69-serving-cert\") pod \"openshift-config-operator-7777fb866f-d8rr7\" (UID: \"871aed01-fb32-4ff6-ab22-a59051b53d69\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.591891 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.592277 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca322b78-934b-4119-a0f6-8037e473a1f9-config\") pod \"machine-api-operator-5694c8668f-jk8v9\" (UID: \"ca322b78-934b-4119-a0f6-8037e473a1f9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jk8v9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.592320 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44376569-5c2b-4bb3-9153-aa4c088e7b0c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-t4nnb\" (UID: \"44376569-5c2b-4bb3-9153-aa4c088e7b0c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.592717 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/344f2318-5424-4d56-9979-747c89ff11ad-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-76gt6\" (UID: \"344f2318-5424-4d56-9979-747c89ff11ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.593006 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/344f2318-5424-4d56-9979-747c89ff11ad-serving-cert\") pod \"authentication-operator-69f744f599-76gt6\" (UID: \"344f2318-5424-4d56-9979-747c89ff11ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.593926 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2h5l2" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.595188 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44f484a0-a976-4e56-82d9-84d8953664db-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tv6fl\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.595938 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44376569-5c2b-4bb3-9153-aa4c088e7b0c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-t4nnb\" (UID: \"44376569-5c2b-4bb3-9153-aa4c088e7b0c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.596624 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac2c17b5-e7c6-4163-9f4c-ac7a989bf9fe-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-l4jqg\" (UID: \"ac2c17b5-e7c6-4163-9f4c-ac7a989bf9fe\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4jqg" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.604790 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-dww9c"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.604954 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.605519 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.605583 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.605941 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.606101 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4xbj9"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.609135 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dvkvs"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.609774 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.610190 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gqvsz"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.610459 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dvkvs" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.611183 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-kfhth"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.612932 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-z8vjx"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.611248 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gqvsz" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.613458 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.613818 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.615510 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.616139 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.617142 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-w84qz"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.617935 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-w84qz" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.619010 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jk8v9"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.620286 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-x5hd4"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.621698 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dkz7l"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.622672 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5ns7t"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.624714 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wx758"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.626230 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jzg7q"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.627535 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.628651 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.629207 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-76gt6"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.632536 5001 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zf27q"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.632586 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.632599 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg7p5"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.636197 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4vmwl"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.637531 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.638655 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xg6k9"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.639247 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xg6k9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.640181 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.641677 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdnlg"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.642818 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-z8vjx"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644235 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12f37af3-c4d6-4cb6-a079-493192562bfa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-zf27q\" (UID: \"12f37af3-c4d6-4cb6-a079-493192562bfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zf27q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644264 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bb5bef45-2b4e-435c-aa48-799bb3421892-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vr6jl\" (UID: \"bb5bef45-2b4e-435c-aa48-799bb3421892\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644292 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gjlz\" (UniqueName: \"kubernetes.io/projected/bc33c805-eeaf-40d2-977a-40c7fffc3b34-kube-api-access-7gjlz\") pod \"control-plane-machine-set-operator-78cbb6b69f-2h5l2\" (UID: \"bc33c805-eeaf-40d2-977a-40c7fffc3b34\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2h5l2" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644322 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/82eb98e0-1282-4f33-827b-4813c7399230-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fg7p5\" (UID: \"82eb98e0-1282-4f33-827b-4813c7399230\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg7p5" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644349 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/df3e2eda-99b8-401a-bfe3-4ebc0ba7628e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2zkdv\" (UID: \"df3e2eda-99b8-401a-bfe3-4ebc0ba7628e\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644372 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftdwm\" (UniqueName: \"kubernetes.io/projected/4fd6ffbb-add2-4c51-93a3-0ed4830085ef-kube-api-access-ftdwm\") pod \"machine-config-controller-84d6567774-9sq5q\" (UID: \"4fd6ffbb-add2-4c51-93a3-0ed4830085ef\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9sq5q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644388 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2e8c775a-c533-4a4c-8346-a0a1e346e873-auth-proxy-config\") pod \"machine-config-operator-74547568cd-n8khk\" (UID: \"2e8c775a-c533-4a4c-8346-a0a1e346e873\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644405 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c-config\") pod \"service-ca-operator-777779d784-8ntx7\" (UID: \"7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ntx7" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644422 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6qbk\" (UniqueName: \"kubernetes.io/projected/6a92c043-2e58-4a72-9ecb-024736e0ff21-kube-api-access-x6qbk\") pod \"collect-profiles-29493675-8hkbb\" (UID: \"6a92c043-2e58-4a72-9ecb-024736e0ff21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644449 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82eb98e0-1282-4f33-827b-4813c7399230-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fg7p5\" (UID: \"82eb98e0-1282-4f33-827b-4813c7399230\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg7p5" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644463 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2e8c775a-c533-4a4c-8346-a0a1e346e873-images\") pod \"machine-config-operator-74547568cd-n8khk\" (UID: \"2e8c775a-c533-4a4c-8346-a0a1e346e873\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 
17:18:16.644499 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df3e2eda-99b8-401a-bfe3-4ebc0ba7628e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2zkdv\" (UID: \"df3e2eda-99b8-401a-bfe3-4ebc0ba7628e\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644523 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cbcff487-3cc8-4e36-a9b9-edff4a99256f-signing-key\") pod \"service-ca-9c57cc56f-x5hd4\" (UID: \"cbcff487-3cc8-4e36-a9b9-edff4a99256f\") " pod="openshift-service-ca/service-ca-9c57cc56f-x5hd4" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644552 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2whh\" (UniqueName: \"kubernetes.io/projected/bb5bef45-2b4e-435c-aa48-799bb3421892-kube-api-access-c2whh\") pod \"olm-operator-6b444d44fb-vr6jl\" (UID: \"bb5bef45-2b4e-435c-aa48-799bb3421892\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644572 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c72b1ab-baa0-45ee-a130-ccebefc3d437-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jzg7q\" (UID: \"7c72b1ab-baa0-45ee-a130-ccebefc3d437\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jzg7q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644589 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a92c043-2e58-4a72-9ecb-024736e0ff21-config-volume\") pod \"collect-profiles-29493675-8hkbb\" (UID: \"6a92c043-2e58-4a72-9ecb-024736e0ff21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644605 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12f37af3-c4d6-4cb6-a079-493192562bfa-config\") pod \"kube-controller-manager-operator-78b949d7b-zf27q\" (UID: \"12f37af3-c4d6-4cb6-a079-493192562bfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zf27q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644667 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxgjp\" (UniqueName: \"kubernetes.io/projected/2e8c775a-c533-4a4c-8346-a0a1e346e873-kube-api-access-mxgjp\") pod \"machine-config-operator-74547568cd-n8khk\" (UID: \"2e8c775a-c533-4a4c-8346-a0a1e346e873\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644687 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wqvx\" (UniqueName: \"kubernetes.io/projected/7c72b1ab-baa0-45ee-a130-ccebefc3d437-kube-api-access-4wqvx\") pod \"package-server-manager-789f6589d5-jzg7q\" (UID: \"7c72b1ab-baa0-45ee-a130-ccebefc3d437\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jzg7q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644731 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70839fe8-b107-4323-a8b6-824e154cd3d8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdnlg\" (UID: \"70839fe8-b107-4323-a8b6-824e154cd3d8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdnlg" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644756 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12f37af3-c4d6-4cb6-a079-493192562bfa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-zf27q\" (UID: \"12f37af3-c4d6-4cb6-a079-493192562bfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zf27q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644772 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc33c805-eeaf-40d2-977a-40c7fffc3b34-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2h5l2\" (UID: \"bc33c805-eeaf-40d2-977a-40c7fffc3b34\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2h5l2" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644793 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtvcn\" (UniqueName: \"kubernetes.io/projected/cbcff487-3cc8-4e36-a9b9-edff4a99256f-kube-api-access-jtvcn\") pod \"service-ca-9c57cc56f-x5hd4\" (UID: \"cbcff487-3cc8-4e36-a9b9-edff4a99256f\") " pod="openshift-service-ca/service-ca-9c57cc56f-x5hd4" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644808 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tltw\" (UniqueName: \"kubernetes.io/projected/d1ddd030-2b42-466e-aa16-73574e2b3233-kube-api-access-5tltw\") pod \"migrator-59844c95c7-4vmwl\" (UID: \"d1ddd030-2b42-466e-aa16-73574e2b3233\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4vmwl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644833 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncwkv\" (UniqueName: \"kubernetes.io/projected/df3e2eda-99b8-401a-bfe3-4ebc0ba7628e-kube-api-access-ncwkv\") pod \"marketplace-operator-79b997595-2zkdv\" (UID: \"df3e2eda-99b8-401a-bfe3-4ebc0ba7628e\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644853 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c-serving-cert\") pod \"service-ca-operator-777779d784-8ntx7\" (UID: \"7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ntx7" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644867 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/4fd6ffbb-add2-4c51-93a3-0ed4830085ef-proxy-tls\") pod \"machine-config-controller-84d6567774-9sq5q\" (UID: \"4fd6ffbb-add2-4c51-93a3-0ed4830085ef\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9sq5q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644888 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a92c043-2e58-4a72-9ecb-024736e0ff21-secret-volume\") pod \"collect-profiles-29493675-8hkbb\" (UID: \"6a92c043-2e58-4a72-9ecb-024736e0ff21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644902 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cbcff487-3cc8-4e36-a9b9-edff4a99256f-signing-cabundle\") pod \"service-ca-9c57cc56f-x5hd4\" (UID: \"cbcff487-3cc8-4e36-a9b9-edff4a99256f\") " pod="openshift-service-ca/service-ca-9c57cc56f-x5hd4" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.644960 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bb5bef45-2b4e-435c-aa48-799bb3421892-srv-cert\") pod \"olm-operator-6b444d44fb-vr6jl\" (UID: \"bb5bef45-2b4e-435c-aa48-799bb3421892\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.645001 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4fd6ffbb-add2-4c51-93a3-0ed4830085ef-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9sq5q\" (UID: \"4fd6ffbb-add2-4c51-93a3-0ed4830085ef\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9sq5q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.645019 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79vbb\" (UniqueName: \"kubernetes.io/projected/82eb98e0-1282-4f33-827b-4813c7399230-kube-api-access-79vbb\") pod \"kube-storage-version-migrator-operator-b67b599dd-fg7p5\" (UID: \"82eb98e0-1282-4f33-827b-4813c7399230\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg7p5" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.645045 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2e8c775a-c533-4a4c-8346-a0a1e346e873-proxy-tls\") pod \"machine-config-operator-74547568cd-n8khk\" (UID: \"2e8c775a-c533-4a4c-8346-a0a1e346e873\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.645074 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70839fe8-b107-4323-a8b6-824e154cd3d8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdnlg\" (UID: \"70839fe8-b107-4323-a8b6-824e154cd3d8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdnlg" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.645089 5001 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt5m5\" (UniqueName: \"kubernetes.io/projected/7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c-kube-api-access-xt5m5\") pod \"service-ca-operator-777779d784-8ntx7\" (UID: \"7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ntx7" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.645119 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70839fe8-b107-4323-a8b6-824e154cd3d8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdnlg\" (UID: \"70839fe8-b107-4323-a8b6-824e154cd3d8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdnlg" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.646693 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dvkvs"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.646718 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8ntx7"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.647367 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-w84qz"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.648322 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.648997 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2h5l2"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.650451 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.652439 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9sq5q"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.653937 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gqvsz"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.656521 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2zkdv"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.657926 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.659593 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xg6k9"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.660943 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kkfqd"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.661515 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kkfqd" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.662146 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mwmbl"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.663265 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mwmbl"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.663370 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.705230 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr2dk\" (UniqueName: \"kubernetes.io/projected/881bc101-23c7-42c2-b4b9-b9983d9d4b1c-kube-api-access-cr2dk\") pod \"downloads-7954f5f757-9zxnt\" (UID: \"881bc101-23c7-42c2-b4b9-b9983d9d4b1c\") " pod="openshift-console/downloads-7954f5f757-9zxnt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.733143 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg5v5\" (UniqueName: \"kubernetes.io/projected/44f484a0-a976-4e56-82d9-84d8953664db-kube-api-access-xg5v5\") pod \"controller-manager-879f6c89f-tv6fl\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.746151 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpc65\" (UniqueName: \"kubernetes.io/projected/b65a21ed-0e4a-4c38-8603-3de6d2ae26c4-kube-api-access-qpc65\") pod \"machine-approver-56656f9798-svm8v\" (UID: \"b65a21ed-0e4a-4c38-8603-3de6d2ae26c4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-svm8v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.746854 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82eb98e0-1282-4f33-827b-4813c7399230-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fg7p5\" (UID: \"82eb98e0-1282-4f33-827b-4813c7399230\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg7p5" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.746888 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2e8c775a-c533-4a4c-8346-a0a1e346e873-images\") pod \"machine-config-operator-74547568cd-n8khk\" (UID: \"2e8c775a-c533-4a4c-8346-a0a1e346e873\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.746914 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df3e2eda-99b8-401a-bfe3-4ebc0ba7628e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2zkdv\" (UID: \"df3e2eda-99b8-401a-bfe3-4ebc0ba7628e\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.746934 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cbcff487-3cc8-4e36-a9b9-edff4a99256f-signing-key\") pod \"service-ca-9c57cc56f-x5hd4\" (UID: \"cbcff487-3cc8-4e36-a9b9-edff4a99256f\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-x5hd4" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.746962 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2whh\" (UniqueName: \"kubernetes.io/projected/bb5bef45-2b4e-435c-aa48-799bb3421892-kube-api-access-c2whh\") pod \"olm-operator-6b444d44fb-vr6jl\" (UID: \"bb5bef45-2b4e-435c-aa48-799bb3421892\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.747611 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c72b1ab-baa0-45ee-a130-ccebefc3d437-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jzg7q\" (UID: \"7c72b1ab-baa0-45ee-a130-ccebefc3d437\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jzg7q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.747657 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a92c043-2e58-4a72-9ecb-024736e0ff21-config-volume\") pod \"collect-profiles-29493675-8hkbb\" (UID: \"6a92c043-2e58-4a72-9ecb-024736e0ff21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.747685 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12f37af3-c4d6-4cb6-a079-493192562bfa-config\") pod \"kube-controller-manager-operator-78b949d7b-zf27q\" (UID: \"12f37af3-c4d6-4cb6-a079-493192562bfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zf27q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.747722 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxgjp\" (UniqueName: \"kubernetes.io/projected/2e8c775a-c533-4a4c-8346-a0a1e346e873-kube-api-access-mxgjp\") pod \"machine-config-operator-74547568cd-n8khk\" (UID: \"2e8c775a-c533-4a4c-8346-a0a1e346e873\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.747756 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wqvx\" (UniqueName: \"kubernetes.io/projected/7c72b1ab-baa0-45ee-a130-ccebefc3d437-kube-api-access-4wqvx\") pod \"package-server-manager-789f6589d5-jzg7q\" (UID: \"7c72b1ab-baa0-45ee-a130-ccebefc3d437\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jzg7q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.748293 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70839fe8-b107-4323-a8b6-824e154cd3d8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdnlg\" (UID: \"70839fe8-b107-4323-a8b6-824e154cd3d8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdnlg" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.748333 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12f37af3-c4d6-4cb6-a079-493192562bfa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-zf27q\" (UID: \"12f37af3-c4d6-4cb6-a079-493192562bfa\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zf27q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.748354 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc33c805-eeaf-40d2-977a-40c7fffc3b34-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2h5l2\" (UID: \"bc33c805-eeaf-40d2-977a-40c7fffc3b34\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2h5l2" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.748381 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtvcn\" (UniqueName: \"kubernetes.io/projected/cbcff487-3cc8-4e36-a9b9-edff4a99256f-kube-api-access-jtvcn\") pod \"service-ca-9c57cc56f-x5hd4\" (UID: \"cbcff487-3cc8-4e36-a9b9-edff4a99256f\") " pod="openshift-service-ca/service-ca-9c57cc56f-x5hd4" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.748400 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tltw\" (UniqueName: \"kubernetes.io/projected/d1ddd030-2b42-466e-aa16-73574e2b3233-kube-api-access-5tltw\") pod \"migrator-59844c95c7-4vmwl\" (UID: \"d1ddd030-2b42-466e-aa16-73574e2b3233\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4vmwl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.748420 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwkv\" (UniqueName: \"kubernetes.io/projected/df3e2eda-99b8-401a-bfe3-4ebc0ba7628e-kube-api-access-ncwkv\") pod \"marketplace-operator-79b997595-2zkdv\" (UID: \"df3e2eda-99b8-401a-bfe3-4ebc0ba7628e\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.749251 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2e8c775a-c533-4a4c-8346-a0a1e346e873-images\") pod \"machine-config-operator-74547568cd-n8khk\" (UID: \"2e8c775a-c533-4a4c-8346-a0a1e346e873\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.749304 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c-serving-cert\") pod \"service-ca-operator-777779d784-8ntx7\" (UID: \"7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ntx7" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.749324 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4fd6ffbb-add2-4c51-93a3-0ed4830085ef-proxy-tls\") pod \"machine-config-controller-84d6567774-9sq5q\" (UID: \"4fd6ffbb-add2-4c51-93a3-0ed4830085ef\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9sq5q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.749348 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a92c043-2e58-4a72-9ecb-024736e0ff21-secret-volume\") pod \"collect-profiles-29493675-8hkbb\" (UID: \"6a92c043-2e58-4a72-9ecb-024736e0ff21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb" Jan 28 17:18:16 crc 
kubenswrapper[5001]: I0128 17:18:16.749367 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cbcff487-3cc8-4e36-a9b9-edff4a99256f-signing-cabundle\") pod \"service-ca-9c57cc56f-x5hd4\" (UID: \"cbcff487-3cc8-4e36-a9b9-edff4a99256f\") " pod="openshift-service-ca/service-ca-9c57cc56f-x5hd4" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.749621 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12f37af3-c4d6-4cb6-a079-493192562bfa-config\") pod \"kube-controller-manager-operator-78b949d7b-zf27q\" (UID: \"12f37af3-c4d6-4cb6-a079-493192562bfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zf27q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.749689 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bb5bef45-2b4e-435c-aa48-799bb3421892-srv-cert\") pod \"olm-operator-6b444d44fb-vr6jl\" (UID: \"bb5bef45-2b4e-435c-aa48-799bb3421892\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.749711 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4fd6ffbb-add2-4c51-93a3-0ed4830085ef-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9sq5q\" (UID: \"4fd6ffbb-add2-4c51-93a3-0ed4830085ef\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9sq5q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.749741 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79vbb\" (UniqueName: \"kubernetes.io/projected/82eb98e0-1282-4f33-827b-4813c7399230-kube-api-access-79vbb\") pod \"kube-storage-version-migrator-operator-b67b599dd-fg7p5\" (UID: \"82eb98e0-1282-4f33-827b-4813c7399230\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg7p5" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.749764 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2e8c775a-c533-4a4c-8346-a0a1e346e873-proxy-tls\") pod \"machine-config-operator-74547568cd-n8khk\" (UID: \"2e8c775a-c533-4a4c-8346-a0a1e346e873\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.749789 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70839fe8-b107-4323-a8b6-824e154cd3d8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdnlg\" (UID: \"70839fe8-b107-4323-a8b6-824e154cd3d8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdnlg" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.749809 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt5m5\" (UniqueName: \"kubernetes.io/projected/7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c-kube-api-access-xt5m5\") pod \"service-ca-operator-777779d784-8ntx7\" (UID: \"7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ntx7" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.749825 
5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70839fe8-b107-4323-a8b6-824e154cd3d8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdnlg\" (UID: \"70839fe8-b107-4323-a8b6-824e154cd3d8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdnlg" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.749846 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12f37af3-c4d6-4cb6-a079-493192562bfa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-zf27q\" (UID: \"12f37af3-c4d6-4cb6-a079-493192562bfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zf27q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.749862 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bb5bef45-2b4e-435c-aa48-799bb3421892-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vr6jl\" (UID: \"bb5bef45-2b4e-435c-aa48-799bb3421892\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.750923 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gjlz\" (UniqueName: \"kubernetes.io/projected/bc33c805-eeaf-40d2-977a-40c7fffc3b34-kube-api-access-7gjlz\") pod \"control-plane-machine-set-operator-78cbb6b69f-2h5l2\" (UID: \"bc33c805-eeaf-40d2-977a-40c7fffc3b34\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2h5l2" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.750999 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82eb98e0-1282-4f33-827b-4813c7399230-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fg7p5\" (UID: \"82eb98e0-1282-4f33-827b-4813c7399230\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg7p5" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.751060 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/df3e2eda-99b8-401a-bfe3-4ebc0ba7628e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2zkdv\" (UID: \"df3e2eda-99b8-401a-bfe3-4ebc0ba7628e\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.751094 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftdwm\" (UniqueName: \"kubernetes.io/projected/4fd6ffbb-add2-4c51-93a3-0ed4830085ef-kube-api-access-ftdwm\") pod \"machine-config-controller-84d6567774-9sq5q\" (UID: \"4fd6ffbb-add2-4c51-93a3-0ed4830085ef\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9sq5q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.751178 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-9zxnt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.751184 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2e8c775a-c533-4a4c-8346-a0a1e346e873-auth-proxy-config\") pod \"machine-config-operator-74547568cd-n8khk\" (UID: \"2e8c775a-c533-4a4c-8346-a0a1e346e873\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.751303 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c-config\") pod \"service-ca-operator-777779d784-8ntx7\" (UID: \"7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ntx7" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.751952 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6qbk\" (UniqueName: \"kubernetes.io/projected/6a92c043-2e58-4a72-9ecb-024736e0ff21-kube-api-access-x6qbk\") pod \"collect-profiles-29493675-8hkbb\" (UID: \"6a92c043-2e58-4a72-9ecb-024736e0ff21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.752165 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2e8c775a-c533-4a4c-8346-a0a1e346e873-auth-proxy-config\") pod \"machine-config-operator-74547568cd-n8khk\" (UID: \"2e8c775a-c533-4a4c-8346-a0a1e346e873\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.752301 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4fd6ffbb-add2-4c51-93a3-0ed4830085ef-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9sq5q\" (UID: \"4fd6ffbb-add2-4c51-93a3-0ed4830085ef\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9sq5q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.767442 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2e8c775a-c533-4a4c-8346-a0a1e346e873-proxy-tls\") pod \"machine-config-operator-74547568cd-n8khk\" (UID: \"2e8c775a-c533-4a4c-8346-a0a1e346e873\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.769126 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.770121 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12f37af3-c4d6-4cb6-a079-493192562bfa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-zf27q\" (UID: \"12f37af3-c4d6-4cb6-a079-493192562bfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zf27q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.771030 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjh6j\" (UniqueName: \"kubernetes.io/projected/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-kube-api-access-gjh6j\") 
pod \"console-f9d7485db-4xbj9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.788806 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.809873 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.829148 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.855575 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.860056 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-svm8v" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.864650 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4fd6ffbb-add2-4c51-93a3-0ed4830085ef-proxy-tls\") pod \"machine-config-controller-84d6567774-9sq5q\" (UID: \"4fd6ffbb-add2-4c51-93a3-0ed4830085ef\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9sq5q" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.869367 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.889001 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.910705 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.923540 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cbcff487-3cc8-4e36-a9b9-edff4a99256f-signing-key\") pod \"service-ca-9c57cc56f-x5hd4\" (UID: \"cbcff487-3cc8-4e36-a9b9-edff4a99256f\") " pod="openshift-service-ca/service-ca-9c57cc56f-x5hd4" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.928441 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.928962 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.931319 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cbcff487-3cc8-4e36-a9b9-edff4a99256f-signing-cabundle\") pod \"service-ca-9c57cc56f-x5hd4\" (UID: \"cbcff487-3cc8-4e36-a9b9-edff4a99256f\") " pod="openshift-service-ca/service-ca-9c57cc56f-x5hd4" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.948051 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.949768 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.963752 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9zxnt"] Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.968901 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 17:18:16 crc kubenswrapper[5001]: W0128 17:18:16.971694 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod881bc101_23c7_42c2_b4b9_b9983d9d4b1c.slice/crio-d41e4d09f4cf738d2814f5889ce306d1ade007d809a5618dd7679d3c7fcf0ee4 WatchSource:0}: Error finding container d41e4d09f4cf738d2814f5889ce306d1ade007d809a5618dd7679d3c7fcf0ee4: Status 404 returned error can't find the container with id d41e4d09f4cf738d2814f5889ce306d1ade007d809a5618dd7679d3c7fcf0ee4 Jan 28 17:18:16 crc kubenswrapper[5001]: I0128 17:18:16.989364 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.009372 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.016435 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82eb98e0-1282-4f33-827b-4813c7399230-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fg7p5\" (UID: \"82eb98e0-1282-4f33-827b-4813c7399230\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg7p5" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.030069 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.050804 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.071002 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.078752 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82eb98e0-1282-4f33-827b-4813c7399230-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fg7p5\" (UID: \"82eb98e0-1282-4f33-827b-4813c7399230\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg7p5" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.089060 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.107628 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4xbj9"] Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.109241 5001 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 28 17:18:17 crc kubenswrapper[5001]: W0128 17:18:17.125966 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8fdac41_2a21_4780_98bc_8b9f6ebd0cf9.slice/crio-741e8d582f727cbdd5ad9c040945f442839c06fdcc256d9a39e02b5a16e19725 WatchSource:0}: Error finding container 741e8d582f727cbdd5ad9c040945f442839c06fdcc256d9a39e02b5a16e19725: Status 404 returned error can't find the container with id 741e8d582f727cbdd5ad9c040945f442839c06fdcc256d9a39e02b5a16e19725 Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.131901 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.147603 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tv6fl"] Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.148498 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 17:18:17 crc kubenswrapper[5001]: W0128 17:18:17.153798 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44f484a0_a976_4e56_82d9_84d8953664db.slice/crio-d522f61544d33d2781f4bfea79dfe2832cfde98f68f13a85d6fb38eaee336515 WatchSource:0}: Error finding container d522f61544d33d2781f4bfea79dfe2832cfde98f68f13a85d6fb38eaee336515: Status 404 returned error can't find the container with id d522f61544d33d2781f4bfea79dfe2832cfde98f68f13a85d6fb38eaee336515 Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.161503 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c72b1ab-baa0-45ee-a130-ccebefc3d437-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jzg7q\" (UID: \"7c72b1ab-baa0-45ee-a130-ccebefc3d437\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jzg7q" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.169318 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.201810 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44376569-5c2b-4bb3-9153-aa4c088e7b0c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-t4nnb\" (UID: \"44376569-5c2b-4bb3-9153-aa4c088e7b0c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.221282 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvmwt\" (UniqueName: \"kubernetes.io/projected/ca322b78-934b-4119-a0f6-8037e473a1f9-kube-api-access-rvmwt\") pod \"machine-api-operator-5694c8668f-jk8v9\" (UID: \"ca322b78-934b-4119-a0f6-8037e473a1f9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jk8v9" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.242811 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prbs5\" (UniqueName: 
\"kubernetes.io/projected/344f2318-5424-4d56-9979-747c89ff11ad-kube-api-access-prbs5\") pod \"authentication-operator-69f744f599-76gt6\" (UID: \"344f2318-5424-4d56-9979-747c89ff11ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.248996 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.268813 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.280855 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jk8v9" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.288936 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.291350 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" event={"ID":"44f484a0-a976-4e56-82d9-84d8953664db","Type":"ContainerStarted","Data":"1853c48e1eaea7acc93da29ac5b2ac4d9eaf7d705635c81af9e03c2b1c1e24d9"} Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.291399 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" event={"ID":"44f484a0-a976-4e56-82d9-84d8953664db","Type":"ContainerStarted","Data":"d522f61544d33d2781f4bfea79dfe2832cfde98f68f13a85d6fb38eaee336515"} Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.291641 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.293507 5001 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tv6fl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.293664 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" podUID="44f484a0-a976-4e56-82d9-84d8953664db" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.294290 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4xbj9" event={"ID":"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9","Type":"ContainerStarted","Data":"835010b7bad246883effd64dc7af97b1f25791df294a81a5d96ed02802df7b1b"} Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.294325 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4xbj9" event={"ID":"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9","Type":"ContainerStarted","Data":"741e8d582f727cbdd5ad9c040945f442839c06fdcc256d9a39e02b5a16e19725"} Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.296879 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9zxnt" 
event={"ID":"881bc101-23c7-42c2-b4b9-b9983d9d4b1c","Type":"ContainerStarted","Data":"ea850c917292f3f4c6e0601f61d3148bfa55259b3d6864b122157c05c2afe4e2"} Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.296989 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9zxnt" event={"ID":"881bc101-23c7-42c2-b4b9-b9983d9d4b1c","Type":"ContainerStarted","Data":"d41e4d09f4cf738d2814f5889ce306d1ade007d809a5618dd7679d3c7fcf0ee4"} Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.297172 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-9zxnt" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.298268 5001 patch_prober.go:28] interesting pod/downloads-7954f5f757-9zxnt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.298586 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9zxnt" podUID="881bc101-23c7-42c2-b4b9-b9983d9d4b1c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.299922 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-svm8v" event={"ID":"b65a21ed-0e4a-4c38-8603-3de6d2ae26c4","Type":"ContainerStarted","Data":"3ed7e7acb64f30751fa37765ec846bcc91f89408aaeed1d45afd5138e1bd3a57"} Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.299963 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-svm8v" event={"ID":"b65a21ed-0e4a-4c38-8603-3de6d2ae26c4","Type":"ContainerStarted","Data":"97f2a01f69524575a00ae19c3f6144d019babf474cb76be70947e1edb581ab28"} Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.308443 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.316242 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/df3e2eda-99b8-401a-bfe3-4ebc0ba7628e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2zkdv\" (UID: \"df3e2eda-99b8-401a-bfe3-4ebc0ba7628e\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.340201 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.348285 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.349030 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df3e2eda-99b8-401a-bfe3-4ebc0ba7628e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2zkdv\" (UID: \"df3e2eda-99b8-401a-bfe3-4ebc0ba7628e\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.368714 5001 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.388955 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.393770 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c-serving-cert\") pod \"service-ca-operator-777779d784-8ntx7\" (UID: \"7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ntx7" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.409068 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.412932 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c-config\") pod \"service-ca-operator-777779d784-8ntx7\" (UID: \"7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ntx7" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.429630 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.435378 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jk8v9"] Jan 28 17:18:17 crc kubenswrapper[5001]: W0128 17:18:17.441353 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca322b78_934b_4119_a0f6_8037e473a1f9.slice/crio-528f7b4def86c4af8a88c3be39b60a9b2fe961a1b3b32b6dfc69fb0adadcd341 WatchSource:0}: Error finding container 528f7b4def86c4af8a88c3be39b60a9b2fe961a1b3b32b6dfc69fb0adadcd341: Status 404 returned error can't find the container with id 528f7b4def86c4af8a88c3be39b60a9b2fe961a1b3b32b6dfc69fb0adadcd341 Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.471752 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kkwt\" (UniqueName: \"kubernetes.io/projected/95832a2d-7e40-4e03-a731-2c8ed45384b4-kube-api-access-7kkwt\") pod \"apiserver-76f77b778f-kfhth\" (UID: \"95832a2d-7e40-4e03-a731-2c8ed45384b4\") " pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.486011 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzzpm\" (UniqueName: \"kubernetes.io/projected/44376569-5c2b-4bb3-9153-aa4c088e7b0c-kube-api-access-rzzpm\") pod \"cluster-image-registry-operator-dc59b4c8b-t4nnb\" (UID: \"44376569-5c2b-4bb3-9153-aa4c088e7b0c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.505375 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bflrm\" (UniqueName: \"kubernetes.io/projected/871aed01-fb32-4ff6-ab22-a59051b53d69-kube-api-access-bflrm\") pod \"openshift-config-operator-7777fb866f-d8rr7\" (UID: \"871aed01-fb32-4ff6-ab22-a59051b53d69\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.524961 5001 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c5v6\" (UniqueName: \"kubernetes.io/projected/688d529a-47a0-40ba-86db-6ae47a10f578-kube-api-access-7c5v6\") pod \"console-operator-58897d9998-dkz7l\" (UID: \"688d529a-47a0-40ba-86db-6ae47a10f578\") " pod="openshift-console-operator/console-operator-58897d9998-dkz7l" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.539610 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.546441 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clnws\" (UniqueName: \"kubernetes.io/projected/07db7890-6e55-4dd2-988b-084d8e060c7b-kube-api-access-clnws\") pod \"openshift-apiserver-operator-796bbdcf4f-r5w7n\" (UID: \"07db7890-6e55-4dd2-988b-084d8e060c7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r5w7n" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.567294 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zwg6\" (UniqueName: \"kubernetes.io/projected/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-kube-api-access-6zwg6\") pod \"route-controller-manager-6576b87f9c-8kk4v\" (UID: \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.567601 5001 request.go:700] Waited for 1.009796379s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dcollect-profiles-config&limit=500&resourceVersion=0 Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.569436 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.570908 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a92c043-2e58-4a72-9ecb-024736e0ff21-config-volume\") pod \"collect-profiles-29493675-8hkbb\" (UID: \"6a92c043-2e58-4a72-9ecb-024736e0ff21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.607959 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r5w7n" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.608839 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q89zl\" (UniqueName: \"kubernetes.io/projected/ac2c17b5-e7c6-4163-9f4c-ac7a989bf9fe-kube-api-access-q89zl\") pod \"cluster-samples-operator-665b6dd947-l4jqg\" (UID: \"ac2c17b5-e7c6-4163-9f4c-ac7a989bf9fe\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4jqg" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.610458 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.622015 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.625162 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bb5bef45-2b4e-435c-aa48-799bb3421892-srv-cert\") pod \"olm-operator-6b444d44fb-vr6jl\" (UID: \"bb5bef45-2b4e-435c-aa48-799bb3421892\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.628624 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.632968 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.666440 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.682366 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bb5bef45-2b4e-435c-aa48-799bb3421892-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vr6jl\" (UID: \"bb5bef45-2b4e-435c-aa48-799bb3421892\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.683256 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a92c043-2e58-4a72-9ecb-024736e0ff21-secret-volume\") pod \"collect-profiles-29493675-8hkbb\" (UID: \"6a92c043-2e58-4a72-9ecb-024736e0ff21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.684872 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grcbz\" (UniqueName: \"kubernetes.io/projected/c14628eb-e612-43d2-b299-88ebb92f22a0-kube-api-access-grcbz\") pod \"apiserver-7bbb656c7d-9jrlf\" (UID: \"c14628eb-e612-43d2-b299-88ebb92f22a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.685675 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-dkz7l" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.707126 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q92d5\" (UniqueName: \"kubernetes.io/projected/d0b3d8c9-d6a5-42b0-8620-157672e4090f-kube-api-access-q92d5\") pod \"openshift-controller-manager-operator-756b6f6bc6-mc9xw\" (UID: \"d0b3d8c9-d6a5-42b0-8620-157672e4090f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mc9xw" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.707260 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.726376 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwwkq\" (UniqueName: \"kubernetes.io/projected/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-kube-api-access-nwwkq\") pod \"oauth-openshift-558db77b4-fhtpl\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.734403 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.746642 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" Jan 28 17:18:17 crc kubenswrapper[5001]: E0128 17:18:17.748657 5001 secret.go:188] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 28 17:18:17 crc kubenswrapper[5001]: E0128 17:18:17.748724 5001 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 28 17:18:17 crc kubenswrapper[5001]: E0128 17:18:17.748752 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70839fe8-b107-4323-a8b6-824e154cd3d8-serving-cert podName:70839fe8-b107-4323-a8b6-824e154cd3d8 nodeName:}" failed. No retries permitted until 2026-01-28 17:18:18.248731444 +0000 UTC m=+144.416519674 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/70839fe8-b107-4323-a8b6-824e154cd3d8-serving-cert") pod "openshift-kube-scheduler-operator-5fdd9b5758-hdnlg" (UID: "70839fe8-b107-4323-a8b6-824e154cd3d8") : failed to sync secret cache: timed out waiting for the condition Jan 28 17:18:17 crc kubenswrapper[5001]: E0128 17:18:17.748790 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc33c805-eeaf-40d2-977a-40c7fffc3b34-control-plane-machine-set-operator-tls podName:bc33c805-eeaf-40d2-977a-40c7fffc3b34 nodeName:}" failed. No retries permitted until 2026-01-28 17:18:18.248768955 +0000 UTC m=+144.416557255 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/bc33c805-eeaf-40d2-977a-40c7fffc3b34-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-2h5l2" (UID: "bc33c805-eeaf-40d2-977a-40c7fffc3b34") : failed to sync secret cache: timed out waiting for the condition Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.750572 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 28 17:18:17 crc kubenswrapper[5001]: E0128 17:18:17.751943 5001 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 28 17:18:17 crc kubenswrapper[5001]: E0128 17:18:17.752015 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/70839fe8-b107-4323-a8b6-824e154cd3d8-config podName:70839fe8-b107-4323-a8b6-824e154cd3d8 nodeName:}" failed. No retries permitted until 2026-01-28 17:18:18.251995668 +0000 UTC m=+144.419783898 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/70839fe8-b107-4323-a8b6-824e154cd3d8-config") pod "openshift-kube-scheduler-operator-5fdd9b5758-hdnlg" (UID: "70839fe8-b107-4323-a8b6-824e154cd3d8") : failed to sync configmap cache: timed out waiting for the condition Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.776967 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.791558 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.793186 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-76gt6"] Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.810650 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.825719 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.828607 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 17:18:17 crc kubenswrapper[5001]: W0128 17:18:17.837185 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod344f2318_5424_4d56_9979_747c89ff11ad.slice/crio-d2e1eb05c7fef8544675cbb4fe4e793b3ad4d9c30a2e3043d42bb62dc7798c0e WatchSource:0}: Error finding container d2e1eb05c7fef8544675cbb4fe4e793b3ad4d9c30a2e3043d42bb62dc7798c0e: Status 404 returned error can't find the container with id d2e1eb05c7fef8544675cbb4fe4e793b3ad4d9c30a2e3043d42bb62dc7798c0e Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.853138 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.853583 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.867300 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4jqg" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.869516 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.889938 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.893939 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7"] Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.899681 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mc9xw" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.910195 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.929105 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.949786 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.970089 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 17:18:17 crc kubenswrapper[5001]: I0128 17:18:17.990079 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.012772 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.028479 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.049350 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.078250 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-kfhth"] Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.097539 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.110167 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.130161 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 28 17:18:18 crc 
kubenswrapper[5001]: I0128 17:18:18.153560 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.169741 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.185871 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r5w7n"] Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.189107 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.207601 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dkz7l"] Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.210615 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.218521 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb"] Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.231821 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.250712 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.273327 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.279340 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70839fe8-b107-4323-a8b6-824e154cd3d8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdnlg\" (UID: \"70839fe8-b107-4323-a8b6-824e154cd3d8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdnlg" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.281678 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70839fe8-b107-4323-a8b6-824e154cd3d8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdnlg\" (UID: \"70839fe8-b107-4323-a8b6-824e154cd3d8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdnlg" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.281736 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc33c805-eeaf-40d2-977a-40c7fffc3b34-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2h5l2\" (UID: \"bc33c805-eeaf-40d2-977a-40c7fffc3b34\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2h5l2" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.280850 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70839fe8-b107-4323-a8b6-824e154cd3d8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdnlg\" (UID: 
\"70839fe8-b107-4323-a8b6-824e154cd3d8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdnlg" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.284515 5001 csr.go:261] certificate signing request csr-8dcd6 is approved, waiting to be issued Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.292560 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.293035 5001 csr.go:257] certificate signing request csr-8dcd6 is issued Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.296452 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4jqg"] Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.299505 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70839fe8-b107-4323-a8b6-824e154cd3d8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdnlg\" (UID: \"70839fe8-b107-4323-a8b6-824e154cd3d8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdnlg" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.302204 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc33c805-eeaf-40d2-977a-40c7fffc3b34-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2h5l2\" (UID: \"bc33c805-eeaf-40d2-977a-40c7fffc3b34\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2h5l2" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.308843 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.316363 5001 generic.go:334] "Generic (PLEG): container finished" podID="871aed01-fb32-4ff6-ab22-a59051b53d69" containerID="d53b3e646468736e16f7c32ae0ef72b3dca6f47708c5087e718000aee5be7b97" exitCode=0 Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.316451 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7" event={"ID":"871aed01-fb32-4ff6-ab22-a59051b53d69","Type":"ContainerDied","Data":"d53b3e646468736e16f7c32ae0ef72b3dca6f47708c5087e718000aee5be7b97"} Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.316484 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7" event={"ID":"871aed01-fb32-4ff6-ab22-a59051b53d69","Type":"ContainerStarted","Data":"3f23eef1647e1eca6be6d5e042ad0d458f6c1d61fa0acee196de844db3d3e902"} Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.318517 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" event={"ID":"344f2318-5424-4d56-9979-747c89ff11ad","Type":"ContainerStarted","Data":"8010636a3a3d15160e826ce20876311c76602cd991327eae273a01737d033625"} Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.318596 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" event={"ID":"344f2318-5424-4d56-9979-747c89ff11ad","Type":"ContainerStarted","Data":"d2e1eb05c7fef8544675cbb4fe4e793b3ad4d9c30a2e3043d42bb62dc7798c0e"} Jan 28 
17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.322116 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-dkz7l" event={"ID":"688d529a-47a0-40ba-86db-6ae47a10f578","Type":"ContainerStarted","Data":"087dab0cf5f5c087ccb27db025990897eec0300fd882cb4ed9ac38ceb59f232a"} Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.323340 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb" event={"ID":"44376569-5c2b-4bb3-9153-aa4c088e7b0c","Type":"ContainerStarted","Data":"6517e45736b583d8a29614e27d6b08990a9b8b636f468a8553aa3f43aa8b5f20"} Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.324855 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kfhth" event={"ID":"95832a2d-7e40-4e03-a731-2c8ed45384b4","Type":"ContainerStarted","Data":"88ac90f2a1b4e34bbe976cbda0f7805f32df439fba524058bd0c55c773d9e372"} Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.325638 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r5w7n" event={"ID":"07db7890-6e55-4dd2-988b-084d8e060c7b","Type":"ContainerStarted","Data":"12ed02ec856e1989be712aae4bd503c869e8a258b705fee4354b2a14ed1c8937"} Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.327670 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jk8v9" event={"ID":"ca322b78-934b-4119-a0f6-8037e473a1f9","Type":"ContainerStarted","Data":"ac11ff6ba48940792c0c1658be6d870ae058332b413edf3c3dda784a8c392526"} Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.327694 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jk8v9" event={"ID":"ca322b78-934b-4119-a0f6-8037e473a1f9","Type":"ContainerStarted","Data":"542c420bc01bed7004d78f332663c8f55dafce57614b483e150d26259bc13747"} Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.327705 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jk8v9" event={"ID":"ca322b78-934b-4119-a0f6-8037e473a1f9","Type":"ContainerStarted","Data":"528f7b4def86c4af8a88c3be39b60a9b2fe961a1b3b32b6dfc69fb0adadcd341"} Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.329534 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-svm8v" event={"ID":"b65a21ed-0e4a-4c38-8603-3de6d2ae26c4","Type":"ContainerStarted","Data":"50589349ff34882cf552e9656cc79bb2c7e4b6052e53ee8db16fcd91d5abac67"} Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.330528 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.330854 5001 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tv6fl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.330893 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" podUID="44f484a0-a976-4e56-82d9-84d8953664db" containerName="controller-manager" probeResult="failure" 
output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.332055 5001 patch_prober.go:28] interesting pod/downloads-7954f5f757-9zxnt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.332082 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9zxnt" podUID="881bc101-23c7-42c2-b4b9-b9983d9d4b1c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.349561 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.370921 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.378634 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v"] Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.388227 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf"] Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.388521 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.408398 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.429536 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.447804 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mc9xw"] Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.450125 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.464847 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fhtpl"] Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.469795 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 17:18:18 crc kubenswrapper[5001]: W0128 17:18:18.471813 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0b3d8c9_d6a5_42b0_8620_157672e4090f.slice/crio-3b3a1950d5d44e40dcab752001bd941976f39bc9977728103033ea3a6b0e33dd WatchSource:0}: Error finding container 3b3a1950d5d44e40dcab752001bd941976f39bc9977728103033ea3a6b0e33dd: Status 404 returned error can't find the container with id 3b3a1950d5d44e40dcab752001bd941976f39bc9977728103033ea3a6b0e33dd Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.499016 5001 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"trusted-ca" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.509411 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.528616 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.548571 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.568188 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.587085 5001 request.go:700] Waited for 1.947627228s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.588892 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.613156 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.629591 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.649780 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.669004 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.689161 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.710843 5001 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.729764 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.748817 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.810070 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2whh\" (UniqueName: \"kubernetes.io/projected/bb5bef45-2b4e-435c-aa48-799bb3421892-kube-api-access-c2whh\") pod \"olm-operator-6b444d44fb-vr6jl\" (UID: \"bb5bef45-2b4e-435c-aa48-799bb3421892\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.834489 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wqvx\" (UniqueName: \"kubernetes.io/projected/7c72b1ab-baa0-45ee-a130-ccebefc3d437-kube-api-access-4wqvx\") pod \"package-server-manager-789f6589d5-jzg7q\" (UID: 
\"7c72b1ab-baa0-45ee-a130-ccebefc3d437\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jzg7q" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.858219 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxgjp\" (UniqueName: \"kubernetes.io/projected/2e8c775a-c533-4a4c-8346-a0a1e346e873-kube-api-access-mxgjp\") pod \"machine-config-operator-74547568cd-n8khk\" (UID: \"2e8c775a-c533-4a4c-8346-a0a1e346e873\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.871568 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncwkv\" (UniqueName: \"kubernetes.io/projected/df3e2eda-99b8-401a-bfe3-4ebc0ba7628e-kube-api-access-ncwkv\") pod \"marketplace-operator-79b997595-2zkdv\" (UID: \"df3e2eda-99b8-401a-bfe3-4ebc0ba7628e\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.897277 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtvcn\" (UniqueName: \"kubernetes.io/projected/cbcff487-3cc8-4e36-a9b9-edff4a99256f-kube-api-access-jtvcn\") pod \"service-ca-9c57cc56f-x5hd4\" (UID: \"cbcff487-3cc8-4e36-a9b9-edff4a99256f\") " pod="openshift-service-ca/service-ca-9c57cc56f-x5hd4" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.912877 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tltw\" (UniqueName: \"kubernetes.io/projected/d1ddd030-2b42-466e-aa16-73574e2b3233-kube-api-access-5tltw\") pod \"migrator-59844c95c7-4vmwl\" (UID: \"d1ddd030-2b42-466e-aa16-73574e2b3233\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4vmwl" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.913176 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jzg7q" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.918135 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.931135 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79vbb\" (UniqueName: \"kubernetes.io/projected/82eb98e0-1282-4f33-827b-4813c7399230-kube-api-access-79vbb\") pod \"kube-storage-version-migrator-operator-b67b599dd-fg7p5\" (UID: \"82eb98e0-1282-4f33-827b-4813c7399230\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg7p5" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.944301 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl" Jan 28 17:18:18 crc kubenswrapper[5001]: I0128 17:18:18.951730 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70839fe8-b107-4323-a8b6-824e154cd3d8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdnlg\" (UID: \"70839fe8-b107-4323-a8b6-824e154cd3d8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdnlg" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.013198 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdnlg" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.016838 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt5m5\" (UniqueName: \"kubernetes.io/projected/7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c-kube-api-access-xt5m5\") pod \"service-ca-operator-777779d784-8ntx7\" (UID: \"7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ntx7" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.034438 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6qbk\" (UniqueName: \"kubernetes.io/projected/6a92c043-2e58-4a72-9ecb-024736e0ff21-kube-api-access-x6qbk\") pod \"collect-profiles-29493675-8hkbb\" (UID: \"6a92c043-2e58-4a72-9ecb-024736e0ff21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.040249 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftdwm\" (UniqueName: \"kubernetes.io/projected/4fd6ffbb-add2-4c51-93a3-0ed4830085ef-kube-api-access-ftdwm\") pod \"machine-config-controller-84d6567774-9sq5q\" (UID: \"4fd6ffbb-add2-4c51-93a3-0ed4830085ef\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9sq5q" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.042362 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gjlz\" (UniqueName: \"kubernetes.io/projected/bc33c805-eeaf-40d2-977a-40c7fffc3b34-kube-api-access-7gjlz\") pod \"control-plane-machine-set-operator-78cbb6b69f-2h5l2\" (UID: \"bc33c805-eeaf-40d2-977a-40c7fffc3b34\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2h5l2" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.047986 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12f37af3-c4d6-4cb6-a079-493192562bfa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-zf27q\" (UID: \"12f37af3-c4d6-4cb6-a079-493192562bfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zf27q" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.054771 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.065048 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4vmwl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.073955 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zf27q" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.114410 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/291de1f0-1f01-45e9-bdbe-5d7eb9e081ba-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wx758\" (UID: \"291de1f0-1f01-45e9-bdbe-5d7eb9e081ba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wx758" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.114465 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36cbdaab-10af-401c-8ec0-867a5e82dc3d-trusted-ca\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.114509 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/36cbdaab-10af-401c-8ec0-867a5e82dc3d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.114540 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptcdk\" (UniqueName: \"kubernetes.io/projected/36cbdaab-10af-401c-8ec0-867a5e82dc3d-kube-api-access-ptcdk\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.114567 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfhk9\" (UniqueName: \"kubernetes.io/projected/cc174eee-37cc-4dca-9a65-fa0f48b80588-kube-api-access-sfhk9\") pod \"packageserver-d55dfcdfc-rhbss\" (UID: \"cc174eee-37cc-4dca-9a65-fa0f48b80588\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.114595 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36cbdaab-10af-401c-8ec0-867a5e82dc3d-bound-sa-token\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.114631 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/36cbdaab-10af-401c-8ec0-867a5e82dc3d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.114653 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cc174eee-37cc-4dca-9a65-fa0f48b80588-apiservice-cert\") pod \"packageserver-d55dfcdfc-rhbss\" (UID: \"cc174eee-37cc-4dca-9a65-fa0f48b80588\") 
" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.114702 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.114730 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2jrp\" (UniqueName: \"kubernetes.io/projected/291de1f0-1f01-45e9-bdbe-5d7eb9e081ba-kube-api-access-m2jrp\") pod \"multus-admission-controller-857f4d67dd-wx758\" (UID: \"291de1f0-1f01-45e9-bdbe-5d7eb9e081ba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wx758" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.114753 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cc174eee-37cc-4dca-9a65-fa0f48b80588-tmpfs\") pod \"packageserver-d55dfcdfc-rhbss\" (UID: \"cc174eee-37cc-4dca-9a65-fa0f48b80588\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.114777 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/36cbdaab-10af-401c-8ec0-867a5e82dc3d-registry-tls\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.114834 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/36cbdaab-10af-401c-8ec0-867a5e82dc3d-registry-certificates\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.114859 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc174eee-37cc-4dca-9a65-fa0f48b80588-webhook-cert\") pod \"packageserver-d55dfcdfc-rhbss\" (UID: \"cc174eee-37cc-4dca-9a65-fa0f48b80588\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" Jan 28 17:18:19 crc kubenswrapper[5001]: E0128 17:18:19.115273 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:19.615258481 +0000 UTC m=+145.783046721 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.178032 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-x5hd4" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.192032 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg7p5" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.208937 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9sq5q" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.221002 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.221235 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bb1a701d-2e02-47ef-a368-51a1bc211dda-certs\") pod \"machine-config-server-kkfqd\" (UID: \"bb1a701d-2e02-47ef-a368-51a1bc211dda\") " pod="openshift-machine-config-operator/machine-config-server-kkfqd" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.221302 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6fg6\" (UniqueName: \"kubernetes.io/projected/458a9e44-85a3-452c-9bec-0fd8b09a9ba8-kube-api-access-p6fg6\") pod \"etcd-operator-b45778765-z8vjx\" (UID: \"458a9e44-85a3-452c-9bec-0fd8b09a9ba8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.221326 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a8f23f7-d3a0-4c33-9237-c00812de2229-config-volume\") pod \"dns-default-w84qz\" (UID: \"6a8f23f7-d3a0-4c33-9237-c00812de2229\") " pod="openshift-dns/dns-default-w84qz" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.221366 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/458a9e44-85a3-452c-9bec-0fd8b09a9ba8-etcd-client\") pod \"etcd-operator-b45778765-z8vjx\" (UID: \"458a9e44-85a3-452c-9bec-0fd8b09a9ba8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.221385 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8253e7b-5b41-4131-827f-b7b31be8959a-config\") pod \"kube-apiserver-operator-766d6c64bb-dvkvs\" (UID: \"e8253e7b-5b41-4131-827f-b7b31be8959a\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dvkvs" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.221434 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/575106e1-5f5b-4e85-973b-8102a88f91b5-plugins-dir\") pod \"csi-hostpathplugin-mwmbl\" (UID: \"575106e1-5f5b-4e85-973b-8102a88f91b5\") " pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.221467 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1f9807f0-3235-44a0-8fc2-ea33f4dfa778-metrics-tls\") pod \"dns-operator-744455d44c-gqvsz\" (UID: \"1f9807f0-3235-44a0-8fc2-ea33f4dfa778\") " pod="openshift-dns-operator/dns-operator-744455d44c-gqvsz" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.221518 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hgqx\" (UniqueName: \"kubernetes.io/projected/6a8f23f7-d3a0-4c33-9237-c00812de2229-kube-api-access-7hgqx\") pod \"dns-default-w84qz\" (UID: \"6a8f23f7-d3a0-4c33-9237-c00812de2229\") " pod="openshift-dns/dns-default-w84qz" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.221544 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/36cbdaab-10af-401c-8ec0-867a5e82dc3d-registry-certificates\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.221583 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlt2f\" (UniqueName: \"kubernetes.io/projected/c003b05a-d6fd-4e04-9d75-5fa9551fc52b-kube-api-access-vlt2f\") pod \"ingress-operator-5b745b69d9-fxfc9\" (UID: \"c003b05a-d6fd-4e04-9d75-5fa9551fc52b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.221633 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc174eee-37cc-4dca-9a65-fa0f48b80588-webhook-cert\") pod \"packageserver-d55dfcdfc-rhbss\" (UID: \"cc174eee-37cc-4dca-9a65-fa0f48b80588\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.221657 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8253e7b-5b41-4131-827f-b7b31be8959a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dvkvs\" (UID: \"e8253e7b-5b41-4131-827f-b7b31be8959a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dvkvs" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.221755 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e36eb22f-82ef-47d9-9418-ae58240aa597-profile-collector-cert\") pod \"catalog-operator-68c6474976-xxzgv\" (UID: \"e36eb22f-82ef-47d9-9418-ae58240aa597\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 
17:18:19.221806 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/291de1f0-1f01-45e9-bdbe-5d7eb9e081ba-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wx758\" (UID: \"291de1f0-1f01-45e9-bdbe-5d7eb9e081ba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wx758" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.221885 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36cbdaab-10af-401c-8ec0-867a5e82dc3d-trusted-ca\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.221992 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/36cbdaab-10af-401c-8ec0-867a5e82dc3d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222150 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t26dg\" (UniqueName: \"kubernetes.io/projected/d46a8dca-7cc0-48b4-8626-6a516d1f502e-kube-api-access-t26dg\") pod \"router-default-5444994796-dww9c\" (UID: \"d46a8dca-7cc0-48b4-8626-6a516d1f502e\") " pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222170 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8253e7b-5b41-4131-827f-b7b31be8959a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dvkvs\" (UID: \"e8253e7b-5b41-4131-827f-b7b31be8959a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dvkvs" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222196 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t2wp\" (UniqueName: \"kubernetes.io/projected/2138a115-5bef-4076-894f-060817d5d343-kube-api-access-8t2wp\") pod \"ingress-canary-xg6k9\" (UID: \"2138a115-5bef-4076-894f-060817d5d343\") " pod="openshift-ingress-canary/ingress-canary-xg6k9" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222230 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptcdk\" (UniqueName: \"kubernetes.io/projected/36cbdaab-10af-401c-8ec0-867a5e82dc3d-kube-api-access-ptcdk\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222306 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jclcx\" (UniqueName: \"kubernetes.io/projected/575106e1-5f5b-4e85-973b-8102a88f91b5-kube-api-access-jclcx\") pod \"csi-hostpathplugin-mwmbl\" (UID: \"575106e1-5f5b-4e85-973b-8102a88f91b5\") " pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222328 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbtlf\" 
(UniqueName: \"kubernetes.io/projected/1f9807f0-3235-44a0-8fc2-ea33f4dfa778-kube-api-access-jbtlf\") pod \"dns-operator-744455d44c-gqvsz\" (UID: \"1f9807f0-3235-44a0-8fc2-ea33f4dfa778\") " pod="openshift-dns-operator/dns-operator-744455d44c-gqvsz" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222350 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d46a8dca-7cc0-48b4-8626-6a516d1f502e-default-certificate\") pod \"router-default-5444994796-dww9c\" (UID: \"d46a8dca-7cc0-48b4-8626-6a516d1f502e\") " pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222372 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/575106e1-5f5b-4e85-973b-8102a88f91b5-csi-data-dir\") pod \"csi-hostpathplugin-mwmbl\" (UID: \"575106e1-5f5b-4e85-973b-8102a88f91b5\") " pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222422 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfhk9\" (UniqueName: \"kubernetes.io/projected/cc174eee-37cc-4dca-9a65-fa0f48b80588-kube-api-access-sfhk9\") pod \"packageserver-d55dfcdfc-rhbss\" (UID: \"cc174eee-37cc-4dca-9a65-fa0f48b80588\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222444 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78ccz\" (UniqueName: \"kubernetes.io/projected/e36eb22f-82ef-47d9-9418-ae58240aa597-kube-api-access-78ccz\") pod \"catalog-operator-68c6474976-xxzgv\" (UID: \"e36eb22f-82ef-47d9-9418-ae58240aa597\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222463 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/458a9e44-85a3-452c-9bec-0fd8b09a9ba8-etcd-ca\") pod \"etcd-operator-b45778765-z8vjx\" (UID: \"458a9e44-85a3-452c-9bec-0fd8b09a9ba8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222516 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/458a9e44-85a3-452c-9bec-0fd8b09a9ba8-serving-cert\") pod \"etcd-operator-b45778765-z8vjx\" (UID: \"458a9e44-85a3-452c-9bec-0fd8b09a9ba8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222561 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/575106e1-5f5b-4e85-973b-8102a88f91b5-socket-dir\") pod \"csi-hostpathplugin-mwmbl\" (UID: \"575106e1-5f5b-4e85-973b-8102a88f91b5\") " pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222611 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frwx9\" (UniqueName: \"kubernetes.io/projected/bb1a701d-2e02-47ef-a368-51a1bc211dda-kube-api-access-frwx9\") pod \"machine-config-server-kkfqd\" (UID: 
\"bb1a701d-2e02-47ef-a368-51a1bc211dda\") " pod="openshift-machine-config-operator/machine-config-server-kkfqd" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222647 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36cbdaab-10af-401c-8ec0-867a5e82dc3d-bound-sa-token\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222683 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c003b05a-d6fd-4e04-9d75-5fa9551fc52b-trusted-ca\") pod \"ingress-operator-5b745b69d9-fxfc9\" (UID: \"c003b05a-d6fd-4e04-9d75-5fa9551fc52b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222717 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/575106e1-5f5b-4e85-973b-8102a88f91b5-mountpoint-dir\") pod \"csi-hostpathplugin-mwmbl\" (UID: \"575106e1-5f5b-4e85-973b-8102a88f91b5\") " pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222753 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c003b05a-d6fd-4e04-9d75-5fa9551fc52b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fxfc9\" (UID: \"c003b05a-d6fd-4e04-9d75-5fa9551fc52b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222784 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/36cbdaab-10af-401c-8ec0-867a5e82dc3d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222804 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bb1a701d-2e02-47ef-a368-51a1bc211dda-node-bootstrap-token\") pod \"machine-config-server-kkfqd\" (UID: \"bb1a701d-2e02-47ef-a368-51a1bc211dda\") " pod="openshift-machine-config-operator/machine-config-server-kkfqd" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222823 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/458a9e44-85a3-452c-9bec-0fd8b09a9ba8-config\") pod \"etcd-operator-b45778765-z8vjx\" (UID: \"458a9e44-85a3-452c-9bec-0fd8b09a9ba8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222882 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cc174eee-37cc-4dca-9a65-fa0f48b80588-apiservice-cert\") pod \"packageserver-d55dfcdfc-rhbss\" (UID: \"cc174eee-37cc-4dca-9a65-fa0f48b80588\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.222969 5001 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d46a8dca-7cc0-48b4-8626-6a516d1f502e-stats-auth\") pod \"router-default-5444994796-dww9c\" (UID: \"d46a8dca-7cc0-48b4-8626-6a516d1f502e\") " pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.223024 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2138a115-5bef-4076-894f-060817d5d343-cert\") pod \"ingress-canary-xg6k9\" (UID: \"2138a115-5bef-4076-894f-060817d5d343\") " pod="openshift-ingress-canary/ingress-canary-xg6k9" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.223079 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d46a8dca-7cc0-48b4-8626-6a516d1f502e-metrics-certs\") pod \"router-default-5444994796-dww9c\" (UID: \"d46a8dca-7cc0-48b4-8626-6a516d1f502e\") " pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.223142 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2jrp\" (UniqueName: \"kubernetes.io/projected/291de1f0-1f01-45e9-bdbe-5d7eb9e081ba-kube-api-access-m2jrp\") pod \"multus-admission-controller-857f4d67dd-wx758\" (UID: \"291de1f0-1f01-45e9-bdbe-5d7eb9e081ba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wx758" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.223165 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cc174eee-37cc-4dca-9a65-fa0f48b80588-tmpfs\") pod \"packageserver-d55dfcdfc-rhbss\" (UID: \"cc174eee-37cc-4dca-9a65-fa0f48b80588\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.223190 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c003b05a-d6fd-4e04-9d75-5fa9551fc52b-metrics-tls\") pod \"ingress-operator-5b745b69d9-fxfc9\" (UID: \"c003b05a-d6fd-4e04-9d75-5fa9551fc52b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.223212 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/458a9e44-85a3-452c-9bec-0fd8b09a9ba8-etcd-service-ca\") pod \"etcd-operator-b45778765-z8vjx\" (UID: \"458a9e44-85a3-452c-9bec-0fd8b09a9ba8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.223234 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/575106e1-5f5b-4e85-973b-8102a88f91b5-registration-dir\") pod \"csi-hostpathplugin-mwmbl\" (UID: \"575106e1-5f5b-4e85-973b-8102a88f91b5\") " pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.223298 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/36cbdaab-10af-401c-8ec0-867a5e82dc3d-registry-tls\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: 
\"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.223317 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a8f23f7-d3a0-4c33-9237-c00812de2229-metrics-tls\") pod \"dns-default-w84qz\" (UID: \"6a8f23f7-d3a0-4c33-9237-c00812de2229\") " pod="openshift-dns/dns-default-w84qz" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.223338 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d46a8dca-7cc0-48b4-8626-6a516d1f502e-service-ca-bundle\") pod \"router-default-5444994796-dww9c\" (UID: \"d46a8dca-7cc0-48b4-8626-6a516d1f502e\") " pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.223375 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e36eb22f-82ef-47d9-9418-ae58240aa597-srv-cert\") pod \"catalog-operator-68c6474976-xxzgv\" (UID: \"e36eb22f-82ef-47d9-9418-ae58240aa597\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv" Jan 28 17:18:19 crc kubenswrapper[5001]: E0128 17:18:19.223661 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:19.723635051 +0000 UTC m=+145.891423321 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.225711 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ntx7" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.232031 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36cbdaab-10af-401c-8ec0-867a5e82dc3d-trusted-ca\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.234160 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/36cbdaab-10af-401c-8ec0-867a5e82dc3d-registry-certificates\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.239533 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.240693 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cc174eee-37cc-4dca-9a65-fa0f48b80588-tmpfs\") pod \"packageserver-d55dfcdfc-rhbss\" (UID: \"cc174eee-37cc-4dca-9a65-fa0f48b80588\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.245832 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/36cbdaab-10af-401c-8ec0-867a5e82dc3d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.253175 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/36cbdaab-10af-401c-8ec0-867a5e82dc3d-registry-tls\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.263707 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/291de1f0-1f01-45e9-bdbe-5d7eb9e081ba-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wx758\" (UID: \"291de1f0-1f01-45e9-bdbe-5d7eb9e081ba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wx758" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.268409 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/36cbdaab-10af-401c-8ec0-867a5e82dc3d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.269567 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cc174eee-37cc-4dca-9a65-fa0f48b80588-apiservice-cert\") pod \"packageserver-d55dfcdfc-rhbss\" (UID: \"cc174eee-37cc-4dca-9a65-fa0f48b80588\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.269797 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc174eee-37cc-4dca-9a65-fa0f48b80588-webhook-cert\") pod \"packageserver-d55dfcdfc-rhbss\" (UID: \"cc174eee-37cc-4dca-9a65-fa0f48b80588\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.276227 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2h5l2" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.306385 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-28 17:13:18 +0000 UTC, rotation deadline is 2026-10-30 10:40:18.232120225 +0000 UTC Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.306419 5001 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6593h21m58.925703718s for next certificate rotation Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.310320 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jzg7q"] Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.317519 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36cbdaab-10af-401c-8ec0-867a5e82dc3d-bound-sa-token\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.319542 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfhk9\" (UniqueName: \"kubernetes.io/projected/cc174eee-37cc-4dca-9a65-fa0f48b80588-kube-api-access-sfhk9\") pod \"packageserver-d55dfcdfc-rhbss\" (UID: \"cc174eee-37cc-4dca-9a65-fa0f48b80588\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.337727 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c003b05a-d6fd-4e04-9d75-5fa9551fc52b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fxfc9\" (UID: \"c003b05a-d6fd-4e04-9d75-5fa9551fc52b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.337767 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/458a9e44-85a3-452c-9bec-0fd8b09a9ba8-config\") pod \"etcd-operator-b45778765-z8vjx\" (UID: \"458a9e44-85a3-452c-9bec-0fd8b09a9ba8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.337800 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bb1a701d-2e02-47ef-a368-51a1bc211dda-node-bootstrap-token\") pod \"machine-config-server-kkfqd\" (UID: \"bb1a701d-2e02-47ef-a368-51a1bc211dda\") " pod="openshift-machine-config-operator/machine-config-server-kkfqd" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.337825 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d46a8dca-7cc0-48b4-8626-6a516d1f502e-stats-auth\") pod \"router-default-5444994796-dww9c\" (UID: \"d46a8dca-7cc0-48b4-8626-6a516d1f502e\") " pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.337852 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2138a115-5bef-4076-894f-060817d5d343-cert\") pod \"ingress-canary-xg6k9\" (UID: \"2138a115-5bef-4076-894f-060817d5d343\") " pod="openshift-ingress-canary/ingress-canary-xg6k9" Jan 28 
17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.337872 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d46a8dca-7cc0-48b4-8626-6a516d1f502e-metrics-certs\") pod \"router-default-5444994796-dww9c\" (UID: \"d46a8dca-7cc0-48b4-8626-6a516d1f502e\") " pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.337897 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c003b05a-d6fd-4e04-9d75-5fa9551fc52b-metrics-tls\") pod \"ingress-operator-5b745b69d9-fxfc9\" (UID: \"c003b05a-d6fd-4e04-9d75-5fa9551fc52b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.337918 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/458a9e44-85a3-452c-9bec-0fd8b09a9ba8-etcd-service-ca\") pod \"etcd-operator-b45778765-z8vjx\" (UID: \"458a9e44-85a3-452c-9bec-0fd8b09a9ba8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.337942 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/575106e1-5f5b-4e85-973b-8102a88f91b5-registration-dir\") pod \"csi-hostpathplugin-mwmbl\" (UID: \"575106e1-5f5b-4e85-973b-8102a88f91b5\") " pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.337968 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338021 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a8f23f7-d3a0-4c33-9237-c00812de2229-metrics-tls\") pod \"dns-default-w84qz\" (UID: \"6a8f23f7-d3a0-4c33-9237-c00812de2229\") " pod="openshift-dns/dns-default-w84qz" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338048 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d46a8dca-7cc0-48b4-8626-6a516d1f502e-service-ca-bundle\") pod \"router-default-5444994796-dww9c\" (UID: \"d46a8dca-7cc0-48b4-8626-6a516d1f502e\") " pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338069 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e36eb22f-82ef-47d9-9418-ae58240aa597-srv-cert\") pod \"catalog-operator-68c6474976-xxzgv\" (UID: \"e36eb22f-82ef-47d9-9418-ae58240aa597\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338086 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bb1a701d-2e02-47ef-a368-51a1bc211dda-certs\") pod \"machine-config-server-kkfqd\" (UID: \"bb1a701d-2e02-47ef-a368-51a1bc211dda\") 
" pod="openshift-machine-config-operator/machine-config-server-kkfqd" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338110 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6fg6\" (UniqueName: \"kubernetes.io/projected/458a9e44-85a3-452c-9bec-0fd8b09a9ba8-kube-api-access-p6fg6\") pod \"etcd-operator-b45778765-z8vjx\" (UID: \"458a9e44-85a3-452c-9bec-0fd8b09a9ba8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338132 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a8f23f7-d3a0-4c33-9237-c00812de2229-config-volume\") pod \"dns-default-w84qz\" (UID: \"6a8f23f7-d3a0-4c33-9237-c00812de2229\") " pod="openshift-dns/dns-default-w84qz" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338147 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/458a9e44-85a3-452c-9bec-0fd8b09a9ba8-etcd-client\") pod \"etcd-operator-b45778765-z8vjx\" (UID: \"458a9e44-85a3-452c-9bec-0fd8b09a9ba8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338164 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8253e7b-5b41-4131-827f-b7b31be8959a-config\") pod \"kube-apiserver-operator-766d6c64bb-dvkvs\" (UID: \"e8253e7b-5b41-4131-827f-b7b31be8959a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dvkvs" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338180 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/575106e1-5f5b-4e85-973b-8102a88f91b5-plugins-dir\") pod \"csi-hostpathplugin-mwmbl\" (UID: \"575106e1-5f5b-4e85-973b-8102a88f91b5\") " pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338197 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1f9807f0-3235-44a0-8fc2-ea33f4dfa778-metrics-tls\") pod \"dns-operator-744455d44c-gqvsz\" (UID: \"1f9807f0-3235-44a0-8fc2-ea33f4dfa778\") " pod="openshift-dns-operator/dns-operator-744455d44c-gqvsz" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338222 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hgqx\" (UniqueName: \"kubernetes.io/projected/6a8f23f7-d3a0-4c33-9237-c00812de2229-kube-api-access-7hgqx\") pod \"dns-default-w84qz\" (UID: \"6a8f23f7-d3a0-4c33-9237-c00812de2229\") " pod="openshift-dns/dns-default-w84qz" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338241 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlt2f\" (UniqueName: \"kubernetes.io/projected/c003b05a-d6fd-4e04-9d75-5fa9551fc52b-kube-api-access-vlt2f\") pod \"ingress-operator-5b745b69d9-fxfc9\" (UID: \"c003b05a-d6fd-4e04-9d75-5fa9551fc52b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338260 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8253e7b-5b41-4131-827f-b7b31be8959a-kube-api-access\") pod 
\"kube-apiserver-operator-766d6c64bb-dvkvs\" (UID: \"e8253e7b-5b41-4131-827f-b7b31be8959a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dvkvs" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338279 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e36eb22f-82ef-47d9-9418-ae58240aa597-profile-collector-cert\") pod \"catalog-operator-68c6474976-xxzgv\" (UID: \"e36eb22f-82ef-47d9-9418-ae58240aa597\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338319 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t2wp\" (UniqueName: \"kubernetes.io/projected/2138a115-5bef-4076-894f-060817d5d343-kube-api-access-8t2wp\") pod \"ingress-canary-xg6k9\" (UID: \"2138a115-5bef-4076-894f-060817d5d343\") " pod="openshift-ingress-canary/ingress-canary-xg6k9" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338333 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t26dg\" (UniqueName: \"kubernetes.io/projected/d46a8dca-7cc0-48b4-8626-6a516d1f502e-kube-api-access-t26dg\") pod \"router-default-5444994796-dww9c\" (UID: \"d46a8dca-7cc0-48b4-8626-6a516d1f502e\") " pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338348 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8253e7b-5b41-4131-827f-b7b31be8959a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dvkvs\" (UID: \"e8253e7b-5b41-4131-827f-b7b31be8959a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dvkvs" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338369 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbtlf\" (UniqueName: \"kubernetes.io/projected/1f9807f0-3235-44a0-8fc2-ea33f4dfa778-kube-api-access-jbtlf\") pod \"dns-operator-744455d44c-gqvsz\" (UID: \"1f9807f0-3235-44a0-8fc2-ea33f4dfa778\") " pod="openshift-dns-operator/dns-operator-744455d44c-gqvsz" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338385 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jclcx\" (UniqueName: \"kubernetes.io/projected/575106e1-5f5b-4e85-973b-8102a88f91b5-kube-api-access-jclcx\") pod \"csi-hostpathplugin-mwmbl\" (UID: \"575106e1-5f5b-4e85-973b-8102a88f91b5\") " pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338403 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d46a8dca-7cc0-48b4-8626-6a516d1f502e-default-certificate\") pod \"router-default-5444994796-dww9c\" (UID: \"d46a8dca-7cc0-48b4-8626-6a516d1f502e\") " pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338417 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/575106e1-5f5b-4e85-973b-8102a88f91b5-csi-data-dir\") pod \"csi-hostpathplugin-mwmbl\" (UID: \"575106e1-5f5b-4e85-973b-8102a88f91b5\") " pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338435 5001 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78ccz\" (UniqueName: \"kubernetes.io/projected/e36eb22f-82ef-47d9-9418-ae58240aa597-kube-api-access-78ccz\") pod \"catalog-operator-68c6474976-xxzgv\" (UID: \"e36eb22f-82ef-47d9-9418-ae58240aa597\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338451 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/458a9e44-85a3-452c-9bec-0fd8b09a9ba8-etcd-ca\") pod \"etcd-operator-b45778765-z8vjx\" (UID: \"458a9e44-85a3-452c-9bec-0fd8b09a9ba8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338467 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/458a9e44-85a3-452c-9bec-0fd8b09a9ba8-serving-cert\") pod \"etcd-operator-b45778765-z8vjx\" (UID: \"458a9e44-85a3-452c-9bec-0fd8b09a9ba8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338485 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/575106e1-5f5b-4e85-973b-8102a88f91b5-socket-dir\") pod \"csi-hostpathplugin-mwmbl\" (UID: \"575106e1-5f5b-4e85-973b-8102a88f91b5\") " pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338502 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frwx9\" (UniqueName: \"kubernetes.io/projected/bb1a701d-2e02-47ef-a368-51a1bc211dda-kube-api-access-frwx9\") pod \"machine-config-server-kkfqd\" (UID: \"bb1a701d-2e02-47ef-a368-51a1bc211dda\") " pod="openshift-machine-config-operator/machine-config-server-kkfqd" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338520 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c003b05a-d6fd-4e04-9d75-5fa9551fc52b-trusted-ca\") pod \"ingress-operator-5b745b69d9-fxfc9\" (UID: \"c003b05a-d6fd-4e04-9d75-5fa9551fc52b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338536 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/575106e1-5f5b-4e85-973b-8102a88f91b5-mountpoint-dir\") pod \"csi-hostpathplugin-mwmbl\" (UID: \"575106e1-5f5b-4e85-973b-8102a88f91b5\") " pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.338622 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/575106e1-5f5b-4e85-973b-8102a88f91b5-mountpoint-dir\") pod \"csi-hostpathplugin-mwmbl\" (UID: \"575106e1-5f5b-4e85-973b-8102a88f91b5\") " pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.339353 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/458a9e44-85a3-452c-9bec-0fd8b09a9ba8-config\") pod \"etcd-operator-b45778765-z8vjx\" (UID: \"458a9e44-85a3-452c-9bec-0fd8b09a9ba8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:19 crc 
kubenswrapper[5001]: I0128 17:18:19.354787 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptcdk\" (UniqueName: \"kubernetes.io/projected/36cbdaab-10af-401c-8ec0-867a5e82dc3d-kube-api-access-ptcdk\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.355829 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d46a8dca-7cc0-48b4-8626-6a516d1f502e-service-ca-bundle\") pod \"router-default-5444994796-dww9c\" (UID: \"d46a8dca-7cc0-48b4-8626-6a516d1f502e\") " pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.356318 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1f9807f0-3235-44a0-8fc2-ea33f4dfa778-metrics-tls\") pod \"dns-operator-744455d44c-gqvsz\" (UID: \"1f9807f0-3235-44a0-8fc2-ea33f4dfa778\") " pod="openshift-dns-operator/dns-operator-744455d44c-gqvsz" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.357066 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a8f23f7-d3a0-4c33-9237-c00812de2229-config-volume\") pod \"dns-default-w84qz\" (UID: \"6a8f23f7-d3a0-4c33-9237-c00812de2229\") " pod="openshift-dns/dns-default-w84qz" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.357539 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2jrp\" (UniqueName: \"kubernetes.io/projected/291de1f0-1f01-45e9-bdbe-5d7eb9e081ba-kube-api-access-m2jrp\") pod \"multus-admission-controller-857f4d67dd-wx758\" (UID: \"291de1f0-1f01-45e9-bdbe-5d7eb9e081ba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wx758" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.357589 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e36eb22f-82ef-47d9-9418-ae58240aa597-profile-collector-cert\") pod \"catalog-operator-68c6474976-xxzgv\" (UID: \"e36eb22f-82ef-47d9-9418-ae58240aa597\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.358182 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/575106e1-5f5b-4e85-973b-8102a88f91b5-registration-dir\") pod \"csi-hostpathplugin-mwmbl\" (UID: \"575106e1-5f5b-4e85-973b-8102a88f91b5\") " pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.358714 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/458a9e44-85a3-452c-9bec-0fd8b09a9ba8-etcd-service-ca\") pod \"etcd-operator-b45778765-z8vjx\" (UID: \"458a9e44-85a3-452c-9bec-0fd8b09a9ba8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.359133 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/575106e1-5f5b-4e85-973b-8102a88f91b5-plugins-dir\") pod \"csi-hostpathplugin-mwmbl\" (UID: \"575106e1-5f5b-4e85-973b-8102a88f91b5\") " 
pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:19 crc kubenswrapper[5001]: E0128 17:18:19.361112 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:19.861097825 +0000 UTC m=+146.028886055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.369769 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/575106e1-5f5b-4e85-973b-8102a88f91b5-socket-dir\") pod \"csi-hostpathplugin-mwmbl\" (UID: \"575106e1-5f5b-4e85-973b-8102a88f91b5\") " pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.370247 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/575106e1-5f5b-4e85-973b-8102a88f91b5-csi-data-dir\") pod \"csi-hostpathplugin-mwmbl\" (UID: \"575106e1-5f5b-4e85-973b-8102a88f91b5\") " pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.371282 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c003b05a-d6fd-4e04-9d75-5fa9551fc52b-trusted-ca\") pod \"ingress-operator-5b745b69d9-fxfc9\" (UID: \"c003b05a-d6fd-4e04-9d75-5fa9551fc52b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.372017 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bb1a701d-2e02-47ef-a368-51a1bc211dda-node-bootstrap-token\") pod \"machine-config-server-kkfqd\" (UID: \"bb1a701d-2e02-47ef-a368-51a1bc211dda\") " pod="openshift-machine-config-operator/machine-config-server-kkfqd" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.375105 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/458a9e44-85a3-452c-9bec-0fd8b09a9ba8-etcd-ca\") pod \"etcd-operator-b45778765-z8vjx\" (UID: \"458a9e44-85a3-452c-9bec-0fd8b09a9ba8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.377228 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8253e7b-5b41-4131-827f-b7b31be8959a-config\") pod \"kube-apiserver-operator-766d6c64bb-dvkvs\" (UID: \"e8253e7b-5b41-4131-827f-b7b31be8959a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dvkvs" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.381569 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/458a9e44-85a3-452c-9bec-0fd8b09a9ba8-etcd-client\") pod \"etcd-operator-b45778765-z8vjx\" (UID: 
\"458a9e44-85a3-452c-9bec-0fd8b09a9ba8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.381615 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8253e7b-5b41-4131-827f-b7b31be8959a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dvkvs\" (UID: \"e8253e7b-5b41-4131-827f-b7b31be8959a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dvkvs" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.391763 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2zkdv"] Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.397600 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/458a9e44-85a3-452c-9bec-0fd8b09a9ba8-serving-cert\") pod \"etcd-operator-b45778765-z8vjx\" (UID: \"458a9e44-85a3-452c-9bec-0fd8b09a9ba8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.398401 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e36eb22f-82ef-47d9-9418-ae58240aa597-srv-cert\") pod \"catalog-operator-68c6474976-xxzgv\" (UID: \"e36eb22f-82ef-47d9-9418-ae58240aa597\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.398828 5001 generic.go:334] "Generic (PLEG): container finished" podID="95832a2d-7e40-4e03-a731-2c8ed45384b4" containerID="67d8d24354db524ab4dfad43e546aefb073f5e2e216c55bee464a6ddd84c1e65" exitCode=0 Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.398928 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kfhth" event={"ID":"95832a2d-7e40-4e03-a731-2c8ed45384b4","Type":"ContainerDied","Data":"67d8d24354db524ab4dfad43e546aefb073f5e2e216c55bee464a6ddd84c1e65"} Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.399012 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2138a115-5bef-4076-894f-060817d5d343-cert\") pod \"ingress-canary-xg6k9\" (UID: \"2138a115-5bef-4076-894f-060817d5d343\") " pod="openshift-ingress-canary/ingress-canary-xg6k9" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.404393 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d46a8dca-7cc0-48b4-8626-6a516d1f502e-stats-auth\") pod \"router-default-5444994796-dww9c\" (UID: \"d46a8dca-7cc0-48b4-8626-6a516d1f502e\") " pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.408716 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a8f23f7-d3a0-4c33-9237-c00812de2229-metrics-tls\") pod \"dns-default-w84qz\" (UID: \"6a8f23f7-d3a0-4c33-9237-c00812de2229\") " pod="openshift-dns/dns-default-w84qz" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.409061 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bb1a701d-2e02-47ef-a368-51a1bc211dda-certs\") pod \"machine-config-server-kkfqd\" (UID: \"bb1a701d-2e02-47ef-a368-51a1bc211dda\") " 
pod="openshift-machine-config-operator/machine-config-server-kkfqd" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.418577 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d46a8dca-7cc0-48b4-8626-6a516d1f502e-default-certificate\") pod \"router-default-5444994796-dww9c\" (UID: \"d46a8dca-7cc0-48b4-8626-6a516d1f502e\") " pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.419799 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d46a8dca-7cc0-48b4-8626-6a516d1f502e-metrics-certs\") pod \"router-default-5444994796-dww9c\" (UID: \"d46a8dca-7cc0-48b4-8626-6a516d1f502e\") " pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.426592 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c003b05a-d6fd-4e04-9d75-5fa9551fc52b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fxfc9\" (UID: \"c003b05a-d6fd-4e04-9d75-5fa9551fc52b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.427892 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c003b05a-d6fd-4e04-9d75-5fa9551fc52b-metrics-tls\") pod \"ingress-operator-5b745b69d9-fxfc9\" (UID: \"c003b05a-d6fd-4e04-9d75-5fa9551fc52b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.435452 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-dkz7l" event={"ID":"688d529a-47a0-40ba-86db-6ae47a10f578","Type":"ContainerStarted","Data":"8c763141bd4fc79763e62f136476cec2ba04b14e1e1debb82f773ffc9ef8c9d7"} Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.435607 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jclcx\" (UniqueName: \"kubernetes.io/projected/575106e1-5f5b-4e85-973b-8102a88f91b5-kube-api-access-jclcx\") pod \"csi-hostpathplugin-mwmbl\" (UID: \"575106e1-5f5b-4e85-973b-8102a88f91b5\") " pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.435765 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hgqx\" (UniqueName: \"kubernetes.io/projected/6a8f23f7-d3a0-4c33-9237-c00812de2229-kube-api-access-7hgqx\") pod \"dns-default-w84qz\" (UID: \"6a8f23f7-d3a0-4c33-9237-c00812de2229\") " pod="openshift-dns/dns-default-w84qz" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.435848 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-dkz7l" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.435900 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlt2f\" (UniqueName: \"kubernetes.io/projected/c003b05a-d6fd-4e04-9d75-5fa9551fc52b-kube-api-access-vlt2f\") pod \"ingress-operator-5b745b69d9-fxfc9\" (UID: \"c003b05a-d6fd-4e04-9d75-5fa9551fc52b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.438729 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.439756 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:19 crc kubenswrapper[5001]: E0128 17:18:19.440326 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:19.940310109 +0000 UTC m=+146.108098339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.441612 5001 patch_prober.go:28] interesting pod/console-operator-58897d9998-dkz7l container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.441641 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-dkz7l" podUID="688d529a-47a0-40ba-86db-6ae47a10f578" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.443481 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" event={"ID":"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a","Type":"ContainerStarted","Data":"fc686689a6157690ca98c1a4ac2ec563379c9fb5c2c3a7c14cdc216b541a9baf"} Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.443520 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" event={"ID":"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a","Type":"ContainerStarted","Data":"ea1049937d36ad5221412ee4b0eca686a57b5ce7f5e4648c7e9e98b7ed0dfa27"} Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.444219 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.445677 5001 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fhtpl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused" start-of-body= Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.445739 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" podUID="f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" containerName="oauth-openshift" 
probeResult="failure" output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.446546 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4jqg" event={"ID":"ac2c17b5-e7c6-4163-9f4c-ac7a989bf9fe","Type":"ContainerStarted","Data":"41097eba2e1c6a95b83c4453392d329823ccd2ec9c0e999b7e14762212ba402a"} Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.446585 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4jqg" event={"ID":"ac2c17b5-e7c6-4163-9f4c-ac7a989bf9fe","Type":"ContainerStarted","Data":"dbcd1e5ce28dea42e78293200f3d244cdd1aa45f468729d98124d8c4acab3e55"} Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.446598 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4jqg" event={"ID":"ac2c17b5-e7c6-4163-9f4c-ac7a989bf9fe","Type":"ContainerStarted","Data":"b4ee544e9351e48aedfbcf0129abaa3505c72f03bf120fc4e15ba0d2531d3e50"} Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.455173 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8253e7b-5b41-4131-827f-b7b31be8959a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dvkvs\" (UID: \"e8253e7b-5b41-4131-827f-b7b31be8959a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dvkvs" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.458159 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r5w7n" event={"ID":"07db7890-6e55-4dd2-988b-084d8e060c7b","Type":"ContainerStarted","Data":"774a2e117a959cddd6006d86b736d78a2af2f432591c932ed8336cf3e8d349e5"} Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.476923 5001 generic.go:334] "Generic (PLEG): container finished" podID="c14628eb-e612-43d2-b299-88ebb92f22a0" containerID="bce043c710b7a75436114825d188db0dd21d50c88261d8f47e23dc7da897184c" exitCode=0 Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.477206 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" event={"ID":"c14628eb-e612-43d2-b299-88ebb92f22a0","Type":"ContainerDied","Data":"bce043c710b7a75436114825d188db0dd21d50c88261d8f47e23dc7da897184c"} Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.477261 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" event={"ID":"c14628eb-e612-43d2-b299-88ebb92f22a0","Type":"ContainerStarted","Data":"00d36b26fab509a393bf02e7248c0248d23ce1e5e355e07c8b6a53049d12caf4"} Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.488187 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t26dg\" (UniqueName: \"kubernetes.io/projected/d46a8dca-7cc0-48b4-8626-6a516d1f502e-kube-api-access-t26dg\") pod \"router-default-5444994796-dww9c\" (UID: \"d46a8dca-7cc0-48b4-8626-6a516d1f502e\") " pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.496934 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7" 
event={"ID":"871aed01-fb32-4ff6-ab22-a59051b53d69","Type":"ContainerStarted","Data":"5b7dbb6023a0dd105574a1971970d3da78c95ef3a4e8a495df0ffbe749a27422"} Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.497836 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wx758" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.497856 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.502640 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb" event={"ID":"44376569-5c2b-4bb3-9153-aa4c088e7b0c","Type":"ContainerStarted","Data":"72de29dd7a743dd3336f778f3548dda10e63869cbdf5047f1b8abe5180fd87f6"} Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.507722 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" event={"ID":"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3","Type":"ContainerStarted","Data":"ce9015950af2499718d12c440724dc51f06b96becc67eeb052377cd9ea3a8e24"} Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.507783 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" event={"ID":"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3","Type":"ContainerStarted","Data":"5f9d9415ec3ab91511df9dd07a0682c093bfddb36cf85dada13a9d67ec5f5c45"} Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.508943 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.509880 5001 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-8kk4v container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.509932 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" podUID="5c087dd8-02f6-4d1b-bb5f-3a624b0210b3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.510423 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t2wp\" (UniqueName: \"kubernetes.io/projected/2138a115-5bef-4076-894f-060817d5d343-kube-api-access-8t2wp\") pod \"ingress-canary-xg6k9\" (UID: \"2138a115-5bef-4076-894f-060817d5d343\") " pod="openshift-ingress-canary/ingress-canary-xg6k9" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.513274 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mc9xw" event={"ID":"d0b3d8c9-d6a5-42b0-8620-157672e4090f","Type":"ContainerStarted","Data":"fcf274a9e033e14d9396632ad447a69c9a754d449372c8a4fb994ece25b0c4fd"} Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.513325 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mc9xw" event={"ID":"d0b3d8c9-d6a5-42b0-8620-157672e4090f","Type":"ContainerStarted","Data":"3b3a1950d5d44e40dcab752001bd941976f39bc9977728103033ea3a6b0e33dd"} Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.534245 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbtlf\" (UniqueName: \"kubernetes.io/projected/1f9807f0-3235-44a0-8fc2-ea33f4dfa778-kube-api-access-jbtlf\") pod \"dns-operator-744455d44c-gqvsz\" (UID: \"1f9807f0-3235-44a0-8fc2-ea33f4dfa778\") " pod="openshift-dns-operator/dns-operator-744455d44c-gqvsz" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.541804 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: E0128 17:18:19.543603 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:20.043585719 +0000 UTC m=+146.211373949 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.553177 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.555886 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frwx9\" (UniqueName: \"kubernetes.io/projected/bb1a701d-2e02-47ef-a368-51a1bc211dda-kube-api-access-frwx9\") pod \"machine-config-server-kkfqd\" (UID: \"bb1a701d-2e02-47ef-a368-51a1bc211dda\") " pod="openshift-machine-config-operator/machine-config-server-kkfqd" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.575101 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6fg6\" (UniqueName: \"kubernetes.io/projected/458a9e44-85a3-452c-9bec-0fd8b09a9ba8-kube-api-access-p6fg6\") pod \"etcd-operator-b45778765-z8vjx\" (UID: \"458a9e44-85a3-452c-9bec-0fd8b09a9ba8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.587519 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78ccz\" (UniqueName: \"kubernetes.io/projected/e36eb22f-82ef-47d9-9418-ae58240aa597-kube-api-access-78ccz\") pod \"catalog-operator-68c6474976-xxzgv\" (UID: \"e36eb22f-82ef-47d9-9418-ae58240aa597\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.593573 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.599395 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.606192 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dvkvs" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.620192 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gqvsz" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.632409 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.644784 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:19 crc kubenswrapper[5001]: E0128 17:18:19.645948 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:20.145926375 +0000 UTC m=+146.313714605 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.654138 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.665721 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-w84qz" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.673073 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xg6k9" Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.682418 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kkfqd" Jan 28 17:18:19 crc kubenswrapper[5001]: W0128 17:18:19.733243 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf3e2eda_99b8_401a_bfe3_4ebc0ba7628e.slice/crio-3e8f0e0082effeee5b0140c5f9c0998c57a35e05c20cb18bb89f1b48dee986ba WatchSource:0}: Error finding container 3e8f0e0082effeee5b0140c5f9c0998c57a35e05c20cb18bb89f1b48dee986ba: Status 404 returned error can't find the container with id 3e8f0e0082effeee5b0140c5f9c0998c57a35e05c20cb18bb89f1b48dee986ba Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.747584 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: E0128 17:18:19.747918 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:20.247904951 +0000 UTC m=+146.415693181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.848526 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:19 crc kubenswrapper[5001]: E0128 17:18:19.848884 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:20.348868392 +0000 UTC m=+146.516656622 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:19 crc kubenswrapper[5001]: I0128 17:18:19.949766 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:19 crc kubenswrapper[5001]: E0128 17:18:19.950450 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:20.450433728 +0000 UTC m=+146.618221958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.025132 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-jk8v9" podStartSLOduration=120.025113326 podStartE2EDuration="2m0.025113326s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:20.016505266 +0000 UTC m=+146.184293496" watchObservedRunningTime="2026-01-28 17:18:20.025113326 +0000 UTC m=+146.192901556" Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.026687 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl"] Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.051120 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:20 crc kubenswrapper[5001]: E0128 17:18:20.051496 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:20.55148004 +0000 UTC m=+146.719268270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.107174 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk"] Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.111708 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7" podStartSLOduration=120.111688719 podStartE2EDuration="2m0.111688719s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:20.108100897 +0000 UTC m=+146.275889127" watchObservedRunningTime="2026-01-28 17:18:20.111688719 +0000 UTC m=+146.279476959" Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.113996 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdnlg"] Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.154485 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:20 crc kubenswrapper[5001]: E0128 17:18:20.155177 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:20.65515986 +0000 UTC m=+146.822948090 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.196955 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-4xbj9" podStartSLOduration=120.196940028 podStartE2EDuration="2m0.196940028s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:20.162993641 +0000 UTC m=+146.330781881" watchObservedRunningTime="2026-01-28 17:18:20.196940028 +0000 UTC m=+146.364728258" Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.267609 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:20 crc kubenswrapper[5001]: E0128 17:18:20.267872 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:20.767852061 +0000 UTC m=+146.935640301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:20 crc kubenswrapper[5001]: W0128 17:18:20.393830 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70839fe8_b107_4323_a8b6_824e154cd3d8.slice/crio-0f0e2f747dc786f65b6d8b98001fb602594e5cb4d2b398107286eb43a9193d4c WatchSource:0}: Error finding container 0f0e2f747dc786f65b6d8b98001fb602594e5cb4d2b398107286eb43a9193d4c: Status 404 returned error can't find the container with id 0f0e2f747dc786f65b6d8b98001fb602594e5cb4d2b398107286eb43a9193d4c Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.399434 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:20 crc kubenswrapper[5001]: E0128 17:18:20.399901 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 17:18:20.899887285 +0000 UTC m=+147.067675515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.482326 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-dkz7l" podStartSLOduration=120.482301042 podStartE2EDuration="2m0.482301042s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:20.471064905 +0000 UTC m=+146.638853135" watchObservedRunningTime="2026-01-28 17:18:20.482301042 +0000 UTC m=+146.650089272" Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.500423 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:20 crc kubenswrapper[5001]: E0128 17:18:20.500741 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:21.000726703 +0000 UTC m=+147.168514933 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.528862 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dww9c" event={"ID":"d46a8dca-7cc0-48b4-8626-6a516d1f502e","Type":"ContainerStarted","Data":"828eaa1878d560555175286df8c973a5f79f587a1647a3c62acca2306bd41567"} Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.528917 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dww9c" event={"ID":"d46a8dca-7cc0-48b4-8626-6a516d1f502e","Type":"ContainerStarted","Data":"91c904f05ae3faa06b9d8f5c984a236dd103a5d81a1c72177ed2c504ddf27f61"} Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.533526 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kkfqd" event={"ID":"bb1a701d-2e02-47ef-a368-51a1bc211dda","Type":"ContainerStarted","Data":"8285d42c15b2ee8fa1b4e2dcc4ae0615a2a1c1b70948c5414690614aabcad20e"} Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.535748 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl" event={"ID":"bb5bef45-2b4e-435c-aa48-799bb3421892","Type":"ContainerStarted","Data":"cbeefd142f0399cf78e1a837bac138fe6206067679b8065dc832b6583cc33c58"} Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.539221 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jzg7q" event={"ID":"7c72b1ab-baa0-45ee-a130-ccebefc3d437","Type":"ContainerStarted","Data":"e8e34a9f4dc5339808bf07cc6f3997e8393cd680db44919e2617c3c5b20be74b"} Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.539313 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jzg7q" event={"ID":"7c72b1ab-baa0-45ee-a130-ccebefc3d437","Type":"ContainerStarted","Data":"1a801ca17ed201d3dd62e0d088f6f6d5593f2f4f2c092f131cb44af6aaa992db"} Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.539328 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jzg7q" event={"ID":"7c72b1ab-baa0-45ee-a130-ccebefc3d437","Type":"ContainerStarted","Data":"73ff48a9888946dc2095bf1096fd47896fe79df3856eddebb11606d328c717ec"} Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.539738 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jzg7q" Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.544144 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdnlg" event={"ID":"70839fe8-b107-4323-a8b6-824e154cd3d8","Type":"ContainerStarted","Data":"0f0e2f747dc786f65b6d8b98001fb602594e5cb4d2b398107286eb43a9193d4c"} Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.549312 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk" event={"ID":"2e8c775a-c533-4a4c-8346-a0a1e346e873","Type":"ContainerStarted","Data":"61802ef3411b4138817daed356cc1e049b7dd6f66aa4767b41ffed060e507e45"} Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.567468 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kfhth" event={"ID":"95832a2d-7e40-4e03-a731-2c8ed45384b4","Type":"ContainerStarted","Data":"7fca5dc23ec09f553e21167010c72691990fc4008186212c180f72bc9d81b629"} Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.573896 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" event={"ID":"c14628eb-e612-43d2-b299-88ebb92f22a0","Type":"ContainerStarted","Data":"2cd25bfbd0371956891c207b3e0575bb1876a81025d566f875d4fcf852417f5d"} Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.606998 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:20 crc kubenswrapper[5001]: E0128 17:18:20.607323 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:21.107311487 +0000 UTC m=+147.275099717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.651921 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.651993 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.652018 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" event={"ID":"df3e2eda-99b8-401a-bfe3-4ebc0ba7628e","Type":"ContainerStarted","Data":"08e75c7b1bbb1ecdc1089f8362762aae879be7cb50c3d44c4382c5ed52e72480"} Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.652038 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" event={"ID":"df3e2eda-99b8-401a-bfe3-4ebc0ba7628e","Type":"ContainerStarted","Data":"3e8f0e0082effeee5b0140c5f9c0998c57a35e05c20cb18bb89f1b48dee986ba"} Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.687026 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r5w7n" podStartSLOduration=120.687009524 podStartE2EDuration="2m0.687009524s" podCreationTimestamp="2026-01-28 17:16:20 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:20.685877685 +0000 UTC m=+146.853665905" watchObservedRunningTime="2026-01-28 17:18:20.687009524 +0000 UTC m=+146.854797754" Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.712899 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:20 crc kubenswrapper[5001]: E0128 17:18:20.717163 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:21.217136544 +0000 UTC m=+147.384924774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.723176 5001 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2zkdv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.723227 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" podUID="df3e2eda-99b8-401a-bfe3-4ebc0ba7628e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.770225 5001 patch_prober.go:28] interesting pod/router-default-5444994796-dww9c container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.770270 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dww9c" podUID="d46a8dca-7cc0-48b4-8626-6a516d1f502e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.818739 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:20 crc kubenswrapper[5001]: E0128 17:18:20.821152 5001 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:21.320855845 +0000 UTC m=+147.488644075 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.923671 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:20 crc kubenswrapper[5001]: E0128 17:18:20.924197 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:21.424177356 +0000 UTC m=+147.591965586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.989068 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" podStartSLOduration=120.989052764 podStartE2EDuration="2m0.989052764s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:20.945278935 +0000 UTC m=+147.113067165" watchObservedRunningTime="2026-01-28 17:18:20.989052764 +0000 UTC m=+147.156840994" Jan 28 17:18:20 crc kubenswrapper[5001]: I0128 17:18:20.991149 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-svm8v" podStartSLOduration=120.991139537 podStartE2EDuration="2m0.991139537s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:20.987428892 +0000 UTC m=+147.155217122" watchObservedRunningTime="2026-01-28 17:18:20.991139537 +0000 UTC m=+147.158927767" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.013259 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.025633 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:21 crc kubenswrapper[5001]: E0128 17:18:21.025929 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:21.525918226 +0000 UTC m=+147.693706456 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.053436 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mc9xw" podStartSLOduration=121.053420909 podStartE2EDuration="2m1.053420909s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:21.051042038 +0000 UTC m=+147.218830268" watchObservedRunningTime="2026-01-28 17:18:21.053420909 +0000 UTC m=+147.221209139" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.057181 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" podStartSLOduration=121.057164555 podStartE2EDuration="2m1.057164555s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:21.01354749 +0000 UTC m=+147.181335720" watchObservedRunningTime="2026-01-28 17:18:21.057164555 +0000 UTC m=+147.224952795" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.133114 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:21 crc kubenswrapper[5001]: E0128 17:18:21.133830 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:21.633812554 +0000 UTC m=+147.801600784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.194416 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" podStartSLOduration=121.194394132 podStartE2EDuration="2m1.194394132s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:21.166790287 +0000 UTC m=+147.334578517" watchObservedRunningTime="2026-01-28 17:18:21.194394132 +0000 UTC m=+147.362182362" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.195548 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zf27q"] Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.197922 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l4jqg" podStartSLOduration=121.197910722 podStartE2EDuration="2m1.197910722s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:21.19039061 +0000 UTC m=+147.358178840" watchObservedRunningTime="2026-01-28 17:18:21.197910722 +0000 UTC m=+147.365698952" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.198998 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9sq5q"] Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.211815 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.234720 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:21 crc kubenswrapper[5001]: E0128 17:18:21.235183 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:21.735167624 +0000 UTC m=+147.902955854 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.327755 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-x5hd4"] Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.336219 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:21 crc kubenswrapper[5001]: E0128 17:18:21.336606 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:21.836588365 +0000 UTC m=+148.004376595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.348194 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4vmwl"] Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.359683 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t4nnb" podStartSLOduration=121.359663175 podStartE2EDuration="2m1.359663175s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:21.35710593 +0000 UTC m=+147.524894170" watchObservedRunningTime="2026-01-28 17:18:21.359663175 +0000 UTC m=+147.527451405" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.416802 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-76gt6" podStartSLOduration=121.416780195 podStartE2EDuration="2m1.416780195s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:21.40564298 +0000 UTC m=+147.573431220" watchObservedRunningTime="2026-01-28 17:18:21.416780195 +0000 UTC m=+147.584568425" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.437765 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:21 crc kubenswrapper[5001]: E0128 17:18:21.438167 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:21.938152921 +0000 UTC m=+148.105941151 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.453335 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg7p5"] Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.456715 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mwmbl"] Jan 28 17:18:21 crc kubenswrapper[5001]: W0128 17:18:21.484061 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod575106e1_5f5b_4e85_973b_8102a88f91b5.slice/crio-9203ff1c314c94cc76662b00873e212ff85c62ef862f747bea4b96be173a92ad WatchSource:0}: Error finding container 9203ff1c314c94cc76662b00873e212ff85c62ef862f747bea4b96be173a92ad: Status 404 returned error can't find the container with id 9203ff1c314c94cc76662b00873e212ff85c62ef862f747bea4b96be173a92ad Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.492608 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb"] Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.513049 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-9zxnt" podStartSLOduration=121.513029875 podStartE2EDuration="2m1.513029875s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:21.512010859 +0000 UTC m=+147.679799089" watchObservedRunningTime="2026-01-28 17:18:21.513029875 +0000 UTC m=+147.680818105" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.529164 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wx758"] Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.544360 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-dkz7l" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.546670 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:21 crc kubenswrapper[5001]: E0128 17:18:21.548057 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:22.04803623 +0000 UTC m=+148.215824460 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.558799 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.558873 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.558970 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.559018 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.559110 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.565832 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:18:21 crc kubenswrapper[5001]: E0128 
17:18:21.566368 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:22.066351618 +0000 UTC m=+148.234139848 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:21 crc kubenswrapper[5001]: W0128 17:18:21.571414 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a92c043_2e58_4a72_9ecb_024736e0ff21.slice/crio-dea048c8a924465074e56188b413eafea540eda3bbb86cf57c741c844b7e3799 WatchSource:0}: Error finding container dea048c8a924465074e56188b413eafea540eda3bbb86cf57c741c844b7e3799: Status 404 returned error can't find the container with id dea048c8a924465074e56188b413eafea540eda3bbb86cf57c741c844b7e3799 Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.579042 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.589199 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.601742 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.613249 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.613638 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss"] Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.616691 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-z8vjx"] Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.623232 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.626103 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.626835 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2h5l2"] Jan 28 17:18:21 crc kubenswrapper[5001]: W0128 17:18:21.627172 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod291de1f0_1f01_45e9_bdbe_5d7eb9e081ba.slice/crio-870596c807cb6addb5b148ebbb660c583b19b182fad062c40262a3c026986510 WatchSource:0}: Error finding container 870596c807cb6addb5b148ebbb660c583b19b182fad062c40262a3c026986510: Status 404 returned error can't find the container with id 870596c807cb6addb5b148ebbb660c583b19b182fad062c40262a3c026986510 Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.631556 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg7p5" event={"ID":"82eb98e0-1282-4f33-827b-4813c7399230","Type":"ContainerStarted","Data":"8004304a8fd825cbb0c8e937f02174f153b4ff34ac258a5426c0f478effad36c"} Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.639053 5001 patch_prober.go:28] interesting pod/router-default-5444994796-dww9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 17:18:21 crc kubenswrapper[5001]: [-]has-synced failed: reason withheld Jan 28 17:18:21 crc kubenswrapper[5001]: [+]process-running ok Jan 28 17:18:21 crc kubenswrapper[5001]: healthz check failed Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.639118 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dww9c" podUID="d46a8dca-7cc0-48b4-8626-6a516d1f502e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.657220 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dvkvs"] Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.666009 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.666800 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv"] Jan 28 17:18:21 crc kubenswrapper[5001]: E0128 17:18:21.667602 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:22.167557995 +0000 UTC m=+148.335346235 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.675717 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kkfqd" event={"ID":"bb1a701d-2e02-47ef-a368-51a1bc211dda","Type":"ContainerStarted","Data":"a5e0e573872c4de8cc5f505e77d4a94ca9c8b1c7c455b02d69d7f1956b7ef36d"} Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.702752 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdnlg" event={"ID":"70839fe8-b107-4323-a8b6-824e154cd3d8","Type":"ContainerStarted","Data":"10ad95845c6078a3d5133b7c453bd52803e8d8388a058ac6cc08660cc106c9fd"} Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.714266 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8ntx7"] Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.718848 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" podStartSLOduration=121.718826675 podStartE2EDuration="2m1.718826675s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:21.6568195 +0000 UTC m=+147.824607730" watchObservedRunningTime="2026-01-28 17:18:21.718826675 +0000 UTC m=+147.886614905" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.724463 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" podStartSLOduration=121.724440479 podStartE2EDuration="2m1.724440479s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:21.693103488 +0000 UTC m=+147.860891718" watchObservedRunningTime="2026-01-28 17:18:21.724440479 +0000 UTC m=+147.892228709" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.793406 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zf27q" event={"ID":"12f37af3-c4d6-4cb6-a079-493192562bfa","Type":"ContainerStarted","Data":"fd7fdc491a4e79d5c5af7d15a2f0fb9f2d355eb164cd77db410df012f005bff6"} Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.793596 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:21 crc kubenswrapper[5001]: E0128 17:18:21.794154 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:22.29413851 +0000 UTC m=+148.461926740 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.803151 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9"] Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.810855 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xg6k9"] Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.812338 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jzg7q" podStartSLOduration=121.812318175 podStartE2EDuration="2m1.812318175s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:21.790838986 +0000 UTC m=+147.958627216" watchObservedRunningTime="2026-01-28 17:18:21.812318175 +0000 UTC m=+147.980106405" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.823785 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-dww9c" podStartSLOduration=121.823762347 podStartE2EDuration="2m1.823762347s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:21.823659925 +0000 UTC m=+147.991448165" watchObservedRunningTime="2026-01-28 17:18:21.823762347 +0000 UTC m=+147.991550577" Jan 28 17:18:21 crc kubenswrapper[5001]: W0128 17:18:21.854875 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc003b05a_d6fd_4e04_9d75_5fa9551fc52b.slice/crio-672f1a7699c36b655d747238af94d7447592e511fe6f6fbd4e83c602efabd1d7 WatchSource:0}: Error finding container 672f1a7699c36b655d747238af94d7447592e511fe6f6fbd4e83c602efabd1d7: Status 404 returned error can't find the container with id 672f1a7699c36b655d747238af94d7447592e511fe6f6fbd4e83c602efabd1d7 Jan 28 17:18:21 crc kubenswrapper[5001]: W0128 17:18:21.856151 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a1510b0_adcb_4b34_b7ee_5ee11f4e7f7c.slice/crio-146f705ce211e718ba3ea72f76ad35dffc120dbec4b7bae52f7cafc50c44ed02 WatchSource:0}: Error finding container 146f705ce211e718ba3ea72f76ad35dffc120dbec4b7bae52f7cafc50c44ed02: Status 404 returned error can't find the container with id 146f705ce211e718ba3ea72f76ad35dffc120dbec4b7bae52f7cafc50c44ed02 Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.858426 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk" 
event={"ID":"2e8c775a-c533-4a4c-8346-a0a1e346e873","Type":"ContainerStarted","Data":"18e33af4e946bd0fcf510716238c2227c2d4d8e0716a8b6e8d6bf0865dfe30c1"} Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.858467 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk" event={"ID":"2e8c775a-c533-4a4c-8346-a0a1e346e873","Type":"ContainerStarted","Data":"08816ce4881894732025f6343049d3ad66d93c4f4e032aa2c8512f05da936a09"} Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.867878 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-w84qz"] Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.871750 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-x5hd4" event={"ID":"cbcff487-3cc8-4e36-a9b9-edff4a99256f","Type":"ContainerStarted","Data":"f5de955944a4655a7c04fb76e7b9063727cb73210a4836fbafcfa1eac049711d"} Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.880612 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl" event={"ID":"bb5bef45-2b4e-435c-aa48-799bb3421892","Type":"ContainerStarted","Data":"92ea8c0d16b82c42c6c63dcdb4bd6954d18287b7468a313e1ba4909cf89a92e3"} Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.881714 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.885161 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9sq5q" event={"ID":"4fd6ffbb-add2-4c51-93a3-0ed4830085ef","Type":"ContainerStarted","Data":"88959ec80aba4d5cd092ca6e8bfbe4ede70d023fb1925ecae679bc0ed5414e43"} Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.901063 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:21 crc kubenswrapper[5001]: E0128 17:18:21.902029 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:22.402014137 +0000 UTC m=+148.569802367 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.902553 5001 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vr6jl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.902585 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl" podUID="bb5bef45-2b4e-435c-aa48-799bb3421892" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.903101 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4vmwl" event={"ID":"d1ddd030-2b42-466e-aa16-73574e2b3233","Type":"ContainerStarted","Data":"520479d18526efe478c8db0c7b6ff6f92e57ba7d38c53701671faf003c232c64"} Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.903994 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-x5hd4" podStartSLOduration=121.903968137 podStartE2EDuration="2m1.903968137s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:21.903584457 +0000 UTC m=+148.071372697" watchObservedRunningTime="2026-01-28 17:18:21.903968137 +0000 UTC m=+148.071756367" Jan 28 17:18:21 crc kubenswrapper[5001]: W0128 17:18:21.931243 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a8f23f7_d3a0_4c33_9237_c00812de2229.slice/crio-6c170cb1ed683f101f4c889527fb684c6099d8eda6a62c90dc6f82ab425c4946 WatchSource:0}: Error finding container 6c170cb1ed683f101f4c889527fb684c6099d8eda6a62c90dc6f82ab425c4946: Status 404 returned error can't find the container with id 6c170cb1ed683f101f4c889527fb684c6099d8eda6a62c90dc6f82ab425c4946 Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.931742 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb" event={"ID":"6a92c043-2e58-4a72-9ecb-024736e0ff21","Type":"ContainerStarted","Data":"dea048c8a924465074e56188b413eafea540eda3bbb86cf57c741c844b7e3799"} Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.950763 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gqvsz"] Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.975433 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kfhth" event={"ID":"95832a2d-7e40-4e03-a731-2c8ed45384b4","Type":"ContainerStarted","Data":"869fad33aefecff041977d67185ea48040a42a7846aca5520ca10d775e58d930"} Jan 28 17:18:21 crc kubenswrapper[5001]: 
I0128 17:18:21.978274 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" event={"ID":"575106e1-5f5b-4e85-973b-8102a88f91b5","Type":"ContainerStarted","Data":"9203ff1c314c94cc76662b00873e212ff85c62ef862f747bea4b96be173a92ad"} Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.981327 5001 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2zkdv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.981369 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" podUID="df3e2eda-99b8-401a-bfe3-4ebc0ba7628e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 28 17:18:21 crc kubenswrapper[5001]: I0128 17:18:21.982262 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kkfqd" podStartSLOduration=5.982234898 podStartE2EDuration="5.982234898s" podCreationTimestamp="2026-01-28 17:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:21.967651725 +0000 UTC m=+148.135439955" watchObservedRunningTime="2026-01-28 17:18:21.982234898 +0000 UTC m=+148.150023128" Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.000521 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-d8rr7" Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.005165 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:22 crc kubenswrapper[5001]: E0128 17:18:22.011591 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:22.511563307 +0000 UTC m=+148.679351537 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.053367 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n8khk" podStartSLOduration=122.053341535 podStartE2EDuration="2m2.053341535s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:22.005091372 +0000 UTC m=+148.172879612" watchObservedRunningTime="2026-01-28 17:18:22.053341535 +0000 UTC m=+148.221129765" Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.073801 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdnlg" podStartSLOduration=122.073772687 podStartE2EDuration="2m2.073772687s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:22.073390897 +0000 UTC m=+148.241179137" watchObservedRunningTime="2026-01-28 17:18:22.073772687 +0000 UTC m=+148.241560917" Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.106827 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:22 crc kubenswrapper[5001]: E0128 17:18:22.107009 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:22.606969716 +0000 UTC m=+148.774757946 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.107726 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:22 crc kubenswrapper[5001]: E0128 17:18:22.108301 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:22.608280179 +0000 UTC m=+148.776068469 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.143173 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl" podStartSLOduration=122.143154051 podStartE2EDuration="2m2.143154051s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:22.099180257 +0000 UTC m=+148.266968507" watchObservedRunningTime="2026-01-28 17:18:22.143154051 +0000 UTC m=+148.310942281" Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.144088 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-kfhth" podStartSLOduration=122.144082854 podStartE2EDuration="2m2.144082854s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:22.142597226 +0000 UTC m=+148.310385486" watchObservedRunningTime="2026-01-28 17:18:22.144082854 +0000 UTC m=+148.311871084" Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.209302 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:22 crc kubenswrapper[5001]: E0128 17:18:22.209514 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:22.709485466 +0000 UTC m=+148.877273696 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.209768 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:22 crc kubenswrapper[5001]: E0128 17:18:22.210181 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:22.710165683 +0000 UTC m=+148.877953913 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.310688 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:22 crc kubenswrapper[5001]: E0128 17:18:22.310907 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:22.810882687 +0000 UTC m=+148.978670917 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.311072 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:22 crc kubenswrapper[5001]: E0128 17:18:22.311407 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:22.81139209 +0000 UTC m=+148.979180320 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.411616 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:22 crc kubenswrapper[5001]: E0128 17:18:22.411894 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:22.911874859 +0000 UTC m=+149.079663089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:22 crc kubenswrapper[5001]: W0128 17:18:22.433616 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-a17a38136ee553602fa89f92f0c562f2cf836e8c1a05e08806dd6da8214ed9d0 WatchSource:0}: Error finding container a17a38136ee553602fa89f92f0c562f2cf836e8c1a05e08806dd6da8214ed9d0: Status 404 returned error can't find the container with id a17a38136ee553602fa89f92f0c562f2cf836e8c1a05e08806dd6da8214ed9d0 Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.513259 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:22 crc kubenswrapper[5001]: E0128 17:18:22.513590 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:23.013572868 +0000 UTC m=+149.181361098 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.598020 5001 patch_prober.go:28] interesting pod/router-default-5444994796-dww9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 17:18:22 crc kubenswrapper[5001]: [-]has-synced failed: reason withheld Jan 28 17:18:22 crc kubenswrapper[5001]: [+]process-running ok Jan 28 17:18:22 crc kubenswrapper[5001]: healthz check failed Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.598075 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dww9c" podUID="d46a8dca-7cc0-48b4-8626-6a516d1f502e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.614996 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:22 crc kubenswrapper[5001]: E0128 17:18:22.615347 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:23.115326299 +0000 UTC m=+149.283114549 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.707889 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.707938 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.716844 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:22 crc kubenswrapper[5001]: E0128 17:18:22.717199 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:23.217184222 +0000 UTC m=+149.384972452 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.721720 5001 patch_prober.go:28] interesting pod/apiserver-76f77b778f-kfhth container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.11:8443/livez\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.721772 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-kfhth" podUID="95832a2d-7e40-4e03-a731-2c8ed45384b4" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.11:8443/livez\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.817891 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:22 crc kubenswrapper[5001]: E0128 17:18:22.818065 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 17:18:23.31803972 +0000 UTC m=+149.485827950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.818465 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:22 crc kubenswrapper[5001]: E0128 17:18:22.818809 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:23.318797579 +0000 UTC m=+149.486585809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.826340 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.826674 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.919119 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:22 crc kubenswrapper[5001]: E0128 17:18:22.919263 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:23.419241237 +0000 UTC m=+149.587029467 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.919757 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:22 crc kubenswrapper[5001]: E0128 17:18:22.920254 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:23.420238902 +0000 UTC m=+149.588027132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.983931 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" event={"ID":"458a9e44-85a3-452c-9bec-0fd8b09a9ba8","Type":"ContainerStarted","Data":"9defc8cd0ef88a43f6d5a90862d4c627782bfbd8ff42b06418e1246f11d163cb"} Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.985503 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4vmwl" event={"ID":"d1ddd030-2b42-466e-aa16-73574e2b3233","Type":"ContainerStarted","Data":"be7c65614a53d0ae2360bf7d446cf3fda9c6e992e02ee3eed645479aa511c412"} Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.986829 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zf27q" event={"ID":"12f37af3-c4d6-4cb6-a079-493192562bfa","Type":"ContainerStarted","Data":"305b9a2bdef4f8d0d26d086d48d299100e4c225be2bc62548850f874bd954ba1"} Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.987902 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9" event={"ID":"c003b05a-d6fd-4e04-9d75-5fa9551fc52b","Type":"ContainerStarted","Data":"672f1a7699c36b655d747238af94d7447592e511fe6f6fbd4e83c602efabd1d7"} Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.988883 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2h5l2" event={"ID":"bc33c805-eeaf-40d2-977a-40c7fffc3b34","Type":"ContainerStarted","Data":"d1f8de24379055bb5cce5ddeb53fe897ed44138965dead3b999a24e4353b5cc1"} Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.990026 5001 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9sq5q" event={"ID":"4fd6ffbb-add2-4c51-93a3-0ed4830085ef","Type":"ContainerStarted","Data":"fdd106037cca9ca83528e20ae084342451348b9104a47b1795f023c19084b7af"} Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.990907 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" event={"ID":"cc174eee-37cc-4dca-9a65-fa0f48b80588","Type":"ContainerStarted","Data":"8affcbb04b5d3df7a88df9ac8e03a9f69c80bc139f8c4a76cedfce3ad9ab0a19"} Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.991822 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0aaac741b02ad620bf65a4a16cab5b5c1444f9d852a02cdd2db34d7ac203358b"} Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.992850 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg7p5" event={"ID":"82eb98e0-1282-4f33-827b-4813c7399230","Type":"ContainerStarted","Data":"9eeda667a34cf36141040a1af49e60e323d6eb770f96bbe5a2605acfed3715fb"} Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.993875 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv" event={"ID":"e36eb22f-82ef-47d9-9418-ae58240aa597","Type":"ContainerStarted","Data":"0f9cb3785b49f6df86fdd370f9329e8967125acea89ae975e0b80e9af038e848"} Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.995004 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb" event={"ID":"6a92c043-2e58-4a72-9ecb-024736e0ff21","Type":"ContainerStarted","Data":"405de87de5ed840d57728124435148c7f4615ef4dca8475e725b5f41c7b6bb0a"} Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.996060 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-x5hd4" event={"ID":"cbcff487-3cc8-4e36-a9b9-edff4a99256f","Type":"ContainerStarted","Data":"cda5581f7fbaf081addcf8fe8f330e5ddffa2574bd5c1bb520a94f6d63624099"} Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.996856 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wx758" event={"ID":"291de1f0-1f01-45e9-bdbe-5d7eb9e081ba","Type":"ContainerStarted","Data":"870596c807cb6addb5b148ebbb660c583b19b182fad062c40262a3c026986510"} Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.998001 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c8d106a5c2d0d87e060f7b3119668e0c496d90a3121828723487e5538474a47d"} Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.998898 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dvkvs" event={"ID":"e8253e7b-5b41-4131-827f-b7b31be8959a","Type":"ContainerStarted","Data":"c1b698867df20849607faec9517c11a525a7d18d80bc88ab8ddc149c2066bcfe"} Jan 28 17:18:22 crc kubenswrapper[5001]: I0128 17:18:22.999821 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a17a38136ee553602fa89f92f0c562f2cf836e8c1a05e08806dd6da8214ed9d0"} Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.000714 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ntx7" event={"ID":"7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c","Type":"ContainerStarted","Data":"146f705ce211e718ba3ea72f76ad35dffc120dbec4b7bae52f7cafc50c44ed02"} Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.001521 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xg6k9" event={"ID":"2138a115-5bef-4076-894f-060817d5d343","Type":"ContainerStarted","Data":"aa612a711092a32ad03ed905a85095f36860db1baf7dcd021fe9a79e1162ea66"} Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.002311 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w84qz" event={"ID":"6a8f23f7-d3a0-4c33-9237-c00812de2229","Type":"ContainerStarted","Data":"6c170cb1ed683f101f4c889527fb684c6099d8eda6a62c90dc6f82ab425c4946"} Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.003561 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gqvsz" event={"ID":"1f9807f0-3235-44a0-8fc2-ea33f4dfa778","Type":"ContainerStarted","Data":"1be18767bc6ef9152b656b46edaa90c0ed264850db23b01c5f7452a3adf5e287"} Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.012057 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vr6jl" Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.020850 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:23 crc kubenswrapper[5001]: E0128 17:18:23.021145 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:23.521128991 +0000 UTC m=+149.688917221 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.021663 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:23 crc kubenswrapper[5001]: E0128 17:18:23.022663 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:23.52265365 +0000 UTC m=+149.690441880 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.122321 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:23 crc kubenswrapper[5001]: E0128 17:18:23.122433 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:23.622411599 +0000 UTC m=+149.790199829 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.122899 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:23 crc kubenswrapper[5001]: E0128 17:18:23.123278 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:23.623268111 +0000 UTC m=+149.791056341 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.225214 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:23 crc kubenswrapper[5001]: E0128 17:18:23.225626 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:23.725607257 +0000 UTC m=+149.893395487 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.326367 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:23 crc kubenswrapper[5001]: E0128 17:18:23.326863 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:23.826841134 +0000 UTC m=+149.994629364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.427902 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:23 crc kubenswrapper[5001]: E0128 17:18:23.428382 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:23.928348819 +0000 UTC m=+150.096137059 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.428446 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:23 crc kubenswrapper[5001]: E0128 17:18:23.428766 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:23.928753649 +0000 UTC m=+150.096541879 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.530038 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:23 crc kubenswrapper[5001]: E0128 17:18:23.530314 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:24.030294524 +0000 UTC m=+150.198082754 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.530540 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:23 crc kubenswrapper[5001]: E0128 17:18:23.530874 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:24.030863719 +0000 UTC m=+150.198651949 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.602773 5001 patch_prober.go:28] interesting pod/router-default-5444994796-dww9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 17:18:23 crc kubenswrapper[5001]: [-]has-synced failed: reason withheld Jan 28 17:18:23 crc kubenswrapper[5001]: [+]process-running ok Jan 28 17:18:23 crc kubenswrapper[5001]: healthz check failed Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.602841 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dww9c" podUID="d46a8dca-7cc0-48b4-8626-6a516d1f502e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.634504 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:23 crc kubenswrapper[5001]: E0128 17:18:23.634887 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:24.134867307 +0000 UTC m=+150.302655537 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.736110 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:23 crc kubenswrapper[5001]: E0128 17:18:23.736742 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:24.236413663 +0000 UTC m=+150.404201893 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.837124 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:23 crc kubenswrapper[5001]: E0128 17:18:23.837268 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:24.33724226 +0000 UTC m=+150.505030490 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.837562 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:23 crc kubenswrapper[5001]: E0128 17:18:23.837942 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:24.337929428 +0000 UTC m=+150.505717648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.938469 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:23 crc kubenswrapper[5001]: E0128 17:18:23.938677 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:24.438645592 +0000 UTC m=+150.606433822 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.938724 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:23 crc kubenswrapper[5001]: E0128 17:18:23.939138 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:24.439127514 +0000 UTC m=+150.606915834 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:23 crc kubenswrapper[5001]: I0128 17:18:23.985241 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.008593 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv" event={"ID":"e36eb22f-82ef-47d9-9418-ae58240aa597","Type":"ContainerStarted","Data":"299943f4b4553d6c507c33de182c84b0edf205074bbb837fd93e977dc74e3f10"} Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.008847 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv" Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.010060 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9" event={"ID":"c003b05a-d6fd-4e04-9d75-5fa9551fc52b","Type":"ContainerStarted","Data":"0681f46ec7a7ff56ac866190a81414ec06c5d7c0aae425cd5a733abea4c33ef3"} Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.010517 5001 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xxzgv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.010556 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv" podUID="e36eb22f-82ef-47d9-9418-ae58240aa597" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: 
connect: connection refused" Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.013938 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ntx7" event={"ID":"7a1510b0-adcb-4b34-b7ee-5ee11f4e7f7c","Type":"ContainerStarted","Data":"b8a1ebf460877b9372a74550549bdaee6feeb50e1988207d218a164b20823476"} Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.034381 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" event={"ID":"458a9e44-85a3-452c-9bec-0fd8b09a9ba8","Type":"ContainerStarted","Data":"dfeb7b34728375b66f992e82a26d1a186150b8410bc6648363f842d8f2c63622"} Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.040078 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:24 crc kubenswrapper[5001]: E0128 17:18:24.040193 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:24.540167937 +0000 UTC m=+150.707956167 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.040374 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:24 crc kubenswrapper[5001]: E0128 17:18:24.040924 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:24.540913656 +0000 UTC m=+150.708701886 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.057616 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" event={"ID":"cc174eee-37cc-4dca-9a65-fa0f48b80588","Type":"ContainerStarted","Data":"0da4e599739ff920427a98d19ab0b2c25d070f3919d0f1a20e36d1f507838ee3"} Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.058713 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.064927 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv" podStartSLOduration=124.064905789 podStartE2EDuration="2m4.064905789s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:24.062476357 +0000 UTC m=+150.230264607" watchObservedRunningTime="2026-01-28 17:18:24.064905789 +0000 UTC m=+150.232694019" Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.065896 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wx758" event={"ID":"291de1f0-1f01-45e9-bdbe-5d7eb9e081ba","Type":"ContainerStarted","Data":"6f609bea0d2eafed8c279d7ab6b1c6736ea810bc6c9e81ff5b3100fbca4b7a32"} Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.081737 5001 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rhbss container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" start-of-body= Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.082165 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" podUID="cc174eee-37cc-4dca-9a65-fa0f48b80588" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.113590 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w84qz" event={"ID":"6a8f23f7-d3a0-4c33-9237-c00812de2229","Type":"ContainerStarted","Data":"f97f2b5fcf0f21e74f5d7eebc03483b8388066c70cea72998cf1c374e845a61b"} Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.137311 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9sq5q" event={"ID":"4fd6ffbb-add2-4c51-93a3-0ed4830085ef","Type":"ContainerStarted","Data":"10b584d3608de6c4bd2512f6b0e8aefa38a7049552c1eeec848924bae8e83f7a"} Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.142569 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:24 crc kubenswrapper[5001]: E0128 17:18:24.143449 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:24.643431486 +0000 UTC m=+150.811219716 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.161860 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dvkvs" event={"ID":"e8253e7b-5b41-4131-827f-b7b31be8959a","Type":"ContainerStarted","Data":"385ff8b7e69b0844b51aca060090503a9872bf34d4dbd54c400151ae0f1f7a8c"} Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.186253 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4vmwl" event={"ID":"d1ddd030-2b42-466e-aa16-73574e2b3233","Type":"ContainerStarted","Data":"9fe4c5ee784c8c247abd1c92aa88930fa0e0038457dd8a499dd8979f45993d64"} Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.203411 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-z8vjx" podStartSLOduration=124.203391718 podStartE2EDuration="2m4.203391718s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:24.200765441 +0000 UTC m=+150.368553671" watchObservedRunningTime="2026-01-28 17:18:24.203391718 +0000 UTC m=+150.371179948" Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.203623 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" podStartSLOduration=124.203617604 podStartE2EDuration="2m4.203617604s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:24.139523386 +0000 UTC m=+150.307311616" watchObservedRunningTime="2026-01-28 17:18:24.203617604 +0000 UTC m=+150.371405834" Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.237960 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2h5l2" event={"ID":"bc33c805-eeaf-40d2-977a-40c7fffc3b34","Type":"ContainerStarted","Data":"c06b681196cf41027112dcb7420423653a34d2312323b1d72cbcba2c228cb56f"} Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.244786 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:24 crc kubenswrapper[5001]: E0128 17:18:24.246581 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:24.746566632 +0000 UTC m=+150.914354862 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.273190 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9sq5q" podStartSLOduration=124.273170792 podStartE2EDuration="2m4.273170792s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:24.272462124 +0000 UTC m=+150.440250354" watchObservedRunningTime="2026-01-28 17:18:24.273170792 +0000 UTC m=+150.440959022" Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.274172 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xg6k9" event={"ID":"2138a115-5bef-4076-894f-060817d5d343","Type":"ContainerStarted","Data":"ccbcfe16d0f7f1b6909af18f65aca529ef4a8af458ffc5ba036fd17e55ec70ba"} Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.274193 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dvkvs" podStartSLOduration=124.274186518 podStartE2EDuration="2m4.274186518s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:24.252230847 +0000 UTC m=+150.420019077" watchObservedRunningTime="2026-01-28 17:18:24.274186518 +0000 UTC m=+150.441974758" Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.284336 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9jrlf" Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.305027 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4vmwl" podStartSLOduration=124.305007686 podStartE2EDuration="2m4.305007686s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:24.295624086 +0000 UTC m=+150.463412306" watchObservedRunningTime="2026-01-28 17:18:24.305007686 +0000 UTC m=+150.472795916" Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.346200 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.347771 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb" podStartSLOduration=124.347749918 podStartE2EDuration="2m4.347749918s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:24.324164885 +0000 UTC m=+150.491953115" watchObservedRunningTime="2026-01-28 17:18:24.347749918 +0000 UTC m=+150.515538148" Jan 28 17:18:24 crc kubenswrapper[5001]: E0128 17:18:24.347853 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:24.84783263 +0000 UTC m=+151.015620860 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.376830 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zf27q" podStartSLOduration=124.376814741 podStartE2EDuration="2m4.376814741s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:24.345660035 +0000 UTC m=+150.513448265" watchObservedRunningTime="2026-01-28 17:18:24.376814741 +0000 UTC m=+150.544602971" Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.446388 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fg7p5" podStartSLOduration=124.446372059 podStartE2EDuration="2m4.446372059s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:24.445222959 +0000 UTC m=+150.613011189" watchObservedRunningTime="2026-01-28 17:18:24.446372059 +0000 UTC m=+150.614160289" Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.447651 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xg6k9" podStartSLOduration=8.447645091 podStartE2EDuration="8.447645091s" podCreationTimestamp="2026-01-28 17:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:24.41786018 +0000 UTC m=+150.585648410" watchObservedRunningTime="2026-01-28 17:18:24.447645091 +0000 UTC m=+150.615433321" Jan 28 17:18:24 crc 
kubenswrapper[5001]: I0128 17:18:24.448777 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:24 crc kubenswrapper[5001]: E0128 17:18:24.449182 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:24.94916686 +0000 UTC m=+151.116955090 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.487066 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2h5l2" podStartSLOduration=124.487048818 podStartE2EDuration="2m4.487048818s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:24.485284213 +0000 UTC m=+150.653072463" watchObservedRunningTime="2026-01-28 17:18:24.487048818 +0000 UTC m=+150.654837048" Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.550524 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:24 crc kubenswrapper[5001]: E0128 17:18:24.550688 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:25.050663344 +0000 UTC m=+151.218451574 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.550840 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:24 crc kubenswrapper[5001]: E0128 17:18:24.551273 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:25.05125826 +0000 UTC m=+151.219046490 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.607521 5001 patch_prober.go:28] interesting pod/router-default-5444994796-dww9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 17:18:24 crc kubenswrapper[5001]: [-]has-synced failed: reason withheld Jan 28 17:18:24 crc kubenswrapper[5001]: [+]process-running ok Jan 28 17:18:24 crc kubenswrapper[5001]: healthz check failed Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.607839 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dww9c" podUID="d46a8dca-7cc0-48b4-8626-6a516d1f502e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.652239 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:24 crc kubenswrapper[5001]: E0128 17:18:24.652396 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:25.152375424 +0000 UTC m=+151.320163644 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.652873 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:24 crc kubenswrapper[5001]: E0128 17:18:24.653306 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:25.153292037 +0000 UTC m=+151.321080267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.754647 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:24 crc kubenswrapper[5001]: E0128 17:18:24.755136 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:25.255006527 +0000 UTC m=+151.422794747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.755391 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:24 crc kubenswrapper[5001]: E0128 17:18:24.755839 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:25.255824308 +0000 UTC m=+151.423612538 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.856460 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:24 crc kubenswrapper[5001]: E0128 17:18:24.856685 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:25.356656015 +0000 UTC m=+151.524444245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.857063 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:24 crc kubenswrapper[5001]: E0128 17:18:24.857390 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:25.357377474 +0000 UTC m=+151.525165704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.958474 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:24 crc kubenswrapper[5001]: E0128 17:18:24.958744 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:25.458712513 +0000 UTC m=+151.626500753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:24 crc kubenswrapper[5001]: I0128 17:18:24.959139 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:24 crc kubenswrapper[5001]: E0128 17:18:24.959483 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:25.459470972 +0000 UTC m=+151.627259202 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.060619 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:25 crc kubenswrapper[5001]: E0128 17:18:25.061181 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:25.561166851 +0000 UTC m=+151.728955081 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.162727 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:25 crc kubenswrapper[5001]: E0128 17:18:25.163108 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:25.663095746 +0000 UTC m=+151.830883976 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.264035 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:25 crc kubenswrapper[5001]: E0128 17:18:25.264198 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:25.76416814 +0000 UTC m=+151.931956380 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.264375 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:25 crc kubenswrapper[5001]: E0128 17:18:25.264889 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:25.764854137 +0000 UTC m=+151.932642367 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.281243 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a60fcc1c866d2b50a4abd81f6c9f2c6713c6d5a29d27c0303fbfdeba02b98d60"} Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.286064 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gqvsz" event={"ID":"1f9807f0-3235-44a0-8fc2-ea33f4dfa778","Type":"ContainerStarted","Data":"9649fe0271857b370913f36f265fd5688e0d1f5273f4bf650b730c9ee8071bde"} Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.286113 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gqvsz" event={"ID":"1f9807f0-3235-44a0-8fc2-ea33f4dfa778","Type":"ContainerStarted","Data":"2300ae23432a74ec9ee58b8d2c41a157e6cf9e4394a88e7a7710ef27a736ed11"} Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.288443 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9" event={"ID":"c003b05a-d6fd-4e04-9d75-5fa9551fc52b","Type":"ContainerStarted","Data":"a45e7a15533c6bb0279b68b6b49042f779d667226a813c061be40310ec5ba550"} Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.291036 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wx758" event={"ID":"291de1f0-1f01-45e9-bdbe-5d7eb9e081ba","Type":"ContainerStarted","Data":"00a7df42f401b5dc47785b5a2aacc5964e2cf40c042dcfd839df4be1eed328d0"} Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.296246 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8961c548e170b53bc35b4c16999704573921f2e393800fe5dfac6af9e77fc73f"} Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.298931 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w84qz" event={"ID":"6a8f23f7-d3a0-4c33-9237-c00812de2229","Type":"ContainerStarted","Data":"3ac2ea309e6ea949cd5ac31cfda0ab96fac71b16d72637693f9ab88af8daa112"} Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.299024 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-w84qz" Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.300283 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"35ac8192a9b3b80968b4270888c66e60d96c58b4af6265daed804e145cdd6fa8"} Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.301104 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.302931 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" event={"ID":"575106e1-5f5b-4e85-973b-8102a88f91b5","Type":"ContainerStarted","Data":"d15ddc2e916a93bf2db555eab0db95219edfdb4ef53af45e534522f0305e0a44"} Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.303577 5001 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rhbss container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" start-of-body= Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.303625 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" podUID="cc174eee-37cc-4dca-9a65-fa0f48b80588" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.303744 5001 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xxzgv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.303989 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv" podUID="e36eb22f-82ef-47d9-9418-ae58240aa597" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.325890 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ntx7" podStartSLOduration=125.325872947 podStartE2EDuration="2m5.325872947s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 
17:18:25.323116806 +0000 UTC m=+151.490905036" watchObservedRunningTime="2026-01-28 17:18:25.325872947 +0000 UTC m=+151.493661177" Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.365639 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:25 crc kubenswrapper[5001]: E0128 17:18:25.366960 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:25.866935086 +0000 UTC m=+152.034723366 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.368523 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:25 crc kubenswrapper[5001]: E0128 17:18:25.370509 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:25.870493927 +0000 UTC m=+152.038282157 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.389280 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fxfc9" podStartSLOduration=125.389254067 podStartE2EDuration="2m5.389254067s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:25.386191249 +0000 UTC m=+151.553979479" watchObservedRunningTime="2026-01-28 17:18:25.389254067 +0000 UTC m=+151.557042317" Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.395517 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-gqvsz" podStartSLOduration=125.395465956 podStartE2EDuration="2m5.395465956s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:25.350875406 +0000 UTC m=+151.518663636" watchObservedRunningTime="2026-01-28 17:18:25.395465956 +0000 UTC m=+151.563254186" Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.447960 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-w84qz" podStartSLOduration=9.447936777 podStartE2EDuration="9.447936777s" podCreationTimestamp="2026-01-28 17:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:25.447440124 +0000 UTC m=+151.615228354" watchObservedRunningTime="2026-01-28 17:18:25.447936777 +0000 UTC m=+151.615725007" Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.476183 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:25 crc kubenswrapper[5001]: E0128 17:18:25.476608 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:25.976589299 +0000 UTC m=+152.144377529 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.525570 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-wx758" podStartSLOduration=125.52554464 podStartE2EDuration="2m5.52554464s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:25.521764504 +0000 UTC m=+151.689552734" watchObservedRunningTime="2026-01-28 17:18:25.52554464 +0000 UTC m=+151.693332880" Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.578507 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:25 crc kubenswrapper[5001]: E0128 17:18:25.579206 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:26.079191702 +0000 UTC m=+152.246979942 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.607257 5001 patch_prober.go:28] interesting pod/router-default-5444994796-dww9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 17:18:25 crc kubenswrapper[5001]: [-]has-synced failed: reason withheld Jan 28 17:18:25 crc kubenswrapper[5001]: [+]process-running ok Jan 28 17:18:25 crc kubenswrapper[5001]: healthz check failed Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.608245 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dww9c" podUID="d46a8dca-7cc0-48b4-8626-6a516d1f502e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.679242 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:25 crc kubenswrapper[5001]: E0128 17:18:25.679523 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:26.179491865 +0000 UTC m=+152.347280105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.679890 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:25 crc kubenswrapper[5001]: E0128 17:18:25.680301 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:26.180288945 +0000 UTC m=+152.348077245 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.781018 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:25 crc kubenswrapper[5001]: E0128 17:18:25.781228 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:26.281204335 +0000 UTC m=+152.448992565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.781325 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:25 crc kubenswrapper[5001]: E0128 17:18:25.781823 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:26.28181337 +0000 UTC m=+152.449601610 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.882143 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:25 crc kubenswrapper[5001]: E0128 17:18:25.882680 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:26.382663338 +0000 UTC m=+152.550451568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:25 crc kubenswrapper[5001]: I0128 17:18:25.983714 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:25 crc kubenswrapper[5001]: E0128 17:18:25.984084 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:26.48406682 +0000 UTC m=+152.651855050 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.084677 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:26 crc kubenswrapper[5001]: E0128 17:18:26.085116 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:26.585096322 +0000 UTC m=+152.752884562 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.185729 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:26 crc kubenswrapper[5001]: E0128 17:18:26.186104 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:26.686090863 +0000 UTC m=+152.853879093 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.286541 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:26 crc kubenswrapper[5001]: E0128 17:18:26.286755 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:26.786725095 +0000 UTC m=+152.954513335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.286853 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:26 crc kubenswrapper[5001]: E0128 17:18:26.287226 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:26.787215618 +0000 UTC m=+152.955003928 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.329506 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" event={"ID":"575106e1-5f5b-4e85-973b-8102a88f91b5","Type":"ContainerStarted","Data":"126b203a1473a51d5f34316ea56bd39e7017ac84c0fcf5efcdb1c350d3e51ae9"} Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.334463 5001 generic.go:334] "Generic (PLEG): container finished" podID="6a92c043-2e58-4a72-9ecb-024736e0ff21" containerID="405de87de5ed840d57728124435148c7f4615ef4dca8475e725b5f41c7b6bb0a" exitCode=0 Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.335354 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb" event={"ID":"6a92c043-2e58-4a72-9ecb-024736e0ff21","Type":"ContainerDied","Data":"405de87de5ed840d57728124435148c7f4615ef4dca8475e725b5f41c7b6bb0a"} Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.388341 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:26 crc kubenswrapper[5001]: E0128 17:18:26.388503 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:26.888476946 +0000 UTC m=+153.056265176 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.390451 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:26 crc kubenswrapper[5001]: E0128 17:18:26.391243 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:26.891226336 +0000 UTC m=+153.059014656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.470880 5001 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.491617 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:26 crc kubenswrapper[5001]: E0128 17:18:26.491783 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:26.991759536 +0000 UTC m=+153.159547786 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.491891 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:26 crc kubenswrapper[5001]: E0128 17:18:26.492274 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:26.992264419 +0000 UTC m=+153.160052649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.593170 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:26 crc kubenswrapper[5001]: E0128 17:18:26.593502 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:27.093484306 +0000 UTC m=+153.261272536 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.600224 5001 patch_prober.go:28] interesting pod/router-default-5444994796-dww9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 17:18:26 crc kubenswrapper[5001]: [-]has-synced failed: reason withheld Jan 28 17:18:26 crc kubenswrapper[5001]: [+]process-running ok Jan 28 17:18:26 crc kubenswrapper[5001]: healthz check failed Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.600282 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dww9c" podUID="d46a8dca-7cc0-48b4-8626-6a516d1f502e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.695092 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:26 crc kubenswrapper[5001]: E0128 17:18:26.695494 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:27.195474193 +0000 UTC m=+153.363262423 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.752832 5001 patch_prober.go:28] interesting pod/downloads-7954f5f757-9zxnt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.752898 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9zxnt" podUID="881bc101-23c7-42c2-b4b9-b9983d9d4b1c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.753276 5001 patch_prober.go:28] interesting pod/downloads-7954f5f757-9zxnt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.753331 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9zxnt" podUID="881bc101-23c7-42c2-b4b9-b9983d9d4b1c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.797132 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:26 crc kubenswrapper[5001]: E0128 17:18:26.797355 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 17:18:27.297326316 +0000 UTC m=+153.465114546 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.797562 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:26 crc kubenswrapper[5001]: E0128 17:18:26.797943 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 17:18:27.297928971 +0000 UTC m=+153.465717201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5ns7t" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.800339 5001 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-28T17:18:26.471129439Z","Handler":null,"Name":""} Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.806692 5001 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.806734 5001 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.838435 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rhbss" Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.898521 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.911615 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.929565 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.929622 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.931795 5001 patch_prober.go:28] interesting pod/console-f9d7485db-4xbj9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.931870 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4xbj9" podUID="a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 28 17:18:26 crc kubenswrapper[5001]: I0128 17:18:26.952874 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.000675 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.013206 5001 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.013247 5001 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.061679 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5ns7t\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.236626 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rnl7x"] Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.237892 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rnl7x" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.239965 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.247350 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rnl7x"] Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.283552 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.304229 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c648cc46-2f0e-4c7f-aaeb-a6abf4486e41-utilities\") pod \"certified-operators-rnl7x\" (UID: \"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41\") " pod="openshift-marketplace/certified-operators-rnl7x" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.304520 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzzb8\" (UniqueName: \"kubernetes.io/projected/c648cc46-2f0e-4c7f-aaeb-a6abf4486e41-kube-api-access-kzzb8\") pod \"certified-operators-rnl7x\" (UID: \"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41\") " pod="openshift-marketplace/certified-operators-rnl7x" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.304545 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c648cc46-2f0e-4c7f-aaeb-a6abf4486e41-catalog-content\") pod \"certified-operators-rnl7x\" (UID: \"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41\") " pod="openshift-marketplace/certified-operators-rnl7x" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.346179 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" event={"ID":"575106e1-5f5b-4e85-973b-8102a88f91b5","Type":"ContainerStarted","Data":"d4f43e1d3110f39ee20b625a8092d9bd1fe9c7d6938378536cc4ed6775c26ed8"} Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.346262 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" event={"ID":"575106e1-5f5b-4e85-973b-8102a88f91b5","Type":"ContainerStarted","Data":"2ea1006b2f14d99b1ca1880f6b3255d61ebcc7d53d729ff9fd6a6869f8cf4758"} Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.370549 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" podStartSLOduration=11.370534016 podStartE2EDuration="11.370534016s" podCreationTimestamp="2026-01-28 17:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:27.367629022 +0000 UTC m=+153.535417252" watchObservedRunningTime="2026-01-28 17:18:27.370534016 +0000 UTC m=+153.538322236" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.405739 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c648cc46-2f0e-4c7f-aaeb-a6abf4486e41-utilities\") pod \"certified-operators-rnl7x\" (UID: \"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41\") " pod="openshift-marketplace/certified-operators-rnl7x" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 
17:18:27.405790 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzzb8\" (UniqueName: \"kubernetes.io/projected/c648cc46-2f0e-4c7f-aaeb-a6abf4486e41-kube-api-access-kzzb8\") pod \"certified-operators-rnl7x\" (UID: \"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41\") " pod="openshift-marketplace/certified-operators-rnl7x" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.405814 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c648cc46-2f0e-4c7f-aaeb-a6abf4486e41-catalog-content\") pod \"certified-operators-rnl7x\" (UID: \"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41\") " pod="openshift-marketplace/certified-operators-rnl7x" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.407086 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c648cc46-2f0e-4c7f-aaeb-a6abf4486e41-catalog-content\") pod \"certified-operators-rnl7x\" (UID: \"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41\") " pod="openshift-marketplace/certified-operators-rnl7x" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.407342 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c648cc46-2f0e-4c7f-aaeb-a6abf4486e41-utilities\") pod \"certified-operators-rnl7x\" (UID: \"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41\") " pod="openshift-marketplace/certified-operators-rnl7x" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.436997 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzzb8\" (UniqueName: \"kubernetes.io/projected/c648cc46-2f0e-4c7f-aaeb-a6abf4486e41-kube-api-access-kzzb8\") pod \"certified-operators-rnl7x\" (UID: \"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41\") " pod="openshift-marketplace/certified-operators-rnl7x" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.444215 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gsncd"] Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.445779 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gsncd" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.449724 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.455525 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gsncd"] Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.507437 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68df3eed-9a6f-4127-ac82-a61ae7216062-utilities\") pod \"community-operators-gsncd\" (UID: \"68df3eed-9a6f-4127-ac82-a61ae7216062\") " pod="openshift-marketplace/community-operators-gsncd" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.507507 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68df3eed-9a6f-4127-ac82-a61ae7216062-catalog-content\") pod \"community-operators-gsncd\" (UID: \"68df3eed-9a6f-4127-ac82-a61ae7216062\") " pod="openshift-marketplace/community-operators-gsncd" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.507529 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7rx8\" (UniqueName: \"kubernetes.io/projected/68df3eed-9a6f-4127-ac82-a61ae7216062-kube-api-access-x7rx8\") pod \"community-operators-gsncd\" (UID: \"68df3eed-9a6f-4127-ac82-a61ae7216062\") " pod="openshift-marketplace/community-operators-gsncd" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.563634 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rnl7x" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.597352 5001 patch_prober.go:28] interesting pod/router-default-5444994796-dww9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 17:18:27 crc kubenswrapper[5001]: [-]has-synced failed: reason withheld Jan 28 17:18:27 crc kubenswrapper[5001]: [+]process-running ok Jan 28 17:18:27 crc kubenswrapper[5001]: healthz check failed Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.597594 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dww9c" podUID="d46a8dca-7cc0-48b4-8626-6a516d1f502e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.606319 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.608607 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68df3eed-9a6f-4127-ac82-a61ae7216062-catalog-content\") pod \"community-operators-gsncd\" (UID: \"68df3eed-9a6f-4127-ac82-a61ae7216062\") " pod="openshift-marketplace/community-operators-gsncd" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.608643 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7rx8\" (UniqueName: \"kubernetes.io/projected/68df3eed-9a6f-4127-ac82-a61ae7216062-kube-api-access-x7rx8\") pod \"community-operators-gsncd\" (UID: \"68df3eed-9a6f-4127-ac82-a61ae7216062\") " pod="openshift-marketplace/community-operators-gsncd" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.608703 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68df3eed-9a6f-4127-ac82-a61ae7216062-utilities\") pod \"community-operators-gsncd\" (UID: \"68df3eed-9a6f-4127-ac82-a61ae7216062\") " pod="openshift-marketplace/community-operators-gsncd" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.609219 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68df3eed-9a6f-4127-ac82-a61ae7216062-utilities\") pod \"community-operators-gsncd\" (UID: \"68df3eed-9a6f-4127-ac82-a61ae7216062\") " pod="openshift-marketplace/community-operators-gsncd" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.609722 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68df3eed-9a6f-4127-ac82-a61ae7216062-catalog-content\") pod \"community-operators-gsncd\" (UID: \"68df3eed-9a6f-4127-ac82-a61ae7216062\") " pod="openshift-marketplace/community-operators-gsncd" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.630215 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7rx8\" (UniqueName: \"kubernetes.io/projected/68df3eed-9a6f-4127-ac82-a61ae7216062-kube-api-access-x7rx8\") pod \"community-operators-gsncd\" (UID: \"68df3eed-9a6f-4127-ac82-a61ae7216062\") " pod="openshift-marketplace/community-operators-gsncd" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.638242 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-btn8g"] Jan 28 17:18:27 crc kubenswrapper[5001]: E0128 17:18:27.639795 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a92c043-2e58-4a72-9ecb-024736e0ff21" containerName="collect-profiles" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.639826 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a92c043-2e58-4a72-9ecb-024736e0ff21" containerName="collect-profiles" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.639953 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a92c043-2e58-4a72-9ecb-024736e0ff21" containerName="collect-profiles" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.641450 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-btn8g" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.642449 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-btn8g"] Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.709279 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5ns7t"] Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.709305 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a92c043-2e58-4a72-9ecb-024736e0ff21-secret-volume\") pod \"6a92c043-2e58-4a72-9ecb-024736e0ff21\" (UID: \"6a92c043-2e58-4a72-9ecb-024736e0ff21\") " Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.709433 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a92c043-2e58-4a72-9ecb-024736e0ff21-config-volume\") pod \"6a92c043-2e58-4a72-9ecb-024736e0ff21\" (UID: \"6a92c043-2e58-4a72-9ecb-024736e0ff21\") " Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.709461 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6qbk\" (UniqueName: \"kubernetes.io/projected/6a92c043-2e58-4a72-9ecb-024736e0ff21-kube-api-access-x6qbk\") pod \"6a92c043-2e58-4a72-9ecb-024736e0ff21\" (UID: \"6a92c043-2e58-4a72-9ecb-024736e0ff21\") " Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.709670 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e4e8139-d262-4a83-aecc-f41c19d0c775-catalog-content\") pod \"certified-operators-btn8g\" (UID: \"0e4e8139-d262-4a83-aecc-f41c19d0c775\") " pod="openshift-marketplace/certified-operators-btn8g" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.709738 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e4e8139-d262-4a83-aecc-f41c19d0c775-utilities\") pod \"certified-operators-btn8g\" (UID: \"0e4e8139-d262-4a83-aecc-f41c19d0c775\") " pod="openshift-marketplace/certified-operators-btn8g" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.709812 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww7kn\" (UniqueName: \"kubernetes.io/projected/0e4e8139-d262-4a83-aecc-f41c19d0c775-kube-api-access-ww7kn\") pod \"certified-operators-btn8g\" (UID: \"0e4e8139-d262-4a83-aecc-f41c19d0c775\") " pod="openshift-marketplace/certified-operators-btn8g" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.712684 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a92c043-2e58-4a72-9ecb-024736e0ff21-config-volume" (OuterVolumeSpecName: "config-volume") pod "6a92c043-2e58-4a72-9ecb-024736e0ff21" (UID: "6a92c043-2e58-4a72-9ecb-024736e0ff21"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.717197 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.717692 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a92c043-2e58-4a72-9ecb-024736e0ff21-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6a92c043-2e58-4a72-9ecb-024736e0ff21" (UID: "6a92c043-2e58-4a72-9ecb-024736e0ff21"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.718830 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a92c043-2e58-4a72-9ecb-024736e0ff21-kube-api-access-x6qbk" (OuterVolumeSpecName: "kube-api-access-x6qbk") pod "6a92c043-2e58-4a72-9ecb-024736e0ff21" (UID: "6a92c043-2e58-4a72-9ecb-024736e0ff21"). InnerVolumeSpecName "kube-api-access-x6qbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.725682 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-kfhth" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.770836 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gsncd" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.810312 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rnl7x"] Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.810712 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e4e8139-d262-4a83-aecc-f41c19d0c775-utilities\") pod \"certified-operators-btn8g\" (UID: \"0e4e8139-d262-4a83-aecc-f41c19d0c775\") " pod="openshift-marketplace/certified-operators-btn8g" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.810815 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww7kn\" (UniqueName: \"kubernetes.io/projected/0e4e8139-d262-4a83-aecc-f41c19d0c775-kube-api-access-ww7kn\") pod \"certified-operators-btn8g\" (UID: \"0e4e8139-d262-4a83-aecc-f41c19d0c775\") " pod="openshift-marketplace/certified-operators-btn8g" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.810871 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e4e8139-d262-4a83-aecc-f41c19d0c775-catalog-content\") pod \"certified-operators-btn8g\" (UID: \"0e4e8139-d262-4a83-aecc-f41c19d0c775\") " pod="openshift-marketplace/certified-operators-btn8g" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.810949 5001 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a92c043-2e58-4a72-9ecb-024736e0ff21-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.810961 5001 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a92c043-2e58-4a72-9ecb-024736e0ff21-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.810985 5001 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-x6qbk\" (UniqueName: \"kubernetes.io/projected/6a92c043-2e58-4a72-9ecb-024736e0ff21-kube-api-access-x6qbk\") on node \"crc\" DevicePath \"\"" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.811945 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e4e8139-d262-4a83-aecc-f41c19d0c775-utilities\") pod \"certified-operators-btn8g\" (UID: \"0e4e8139-d262-4a83-aecc-f41c19d0c775\") " pod="openshift-marketplace/certified-operators-btn8g" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.813725 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e4e8139-d262-4a83-aecc-f41c19d0c775-catalog-content\") pod \"certified-operators-btn8g\" (UID: \"0e4e8139-d262-4a83-aecc-f41c19d0c775\") " pod="openshift-marketplace/certified-operators-btn8g" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.851160 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww7kn\" (UniqueName: \"kubernetes.io/projected/0e4e8139-d262-4a83-aecc-f41c19d0c775-kube-api-access-ww7kn\") pod \"certified-operators-btn8g\" (UID: \"0e4e8139-d262-4a83-aecc-f41c19d0c775\") " pod="openshift-marketplace/certified-operators-btn8g" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.852296 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-957th"] Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.853193 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-957th" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.869054 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-957th"] Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.912742 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c88444-d303-455f-b732-0e144a5f98e8-utilities\") pod \"community-operators-957th\" (UID: \"95c88444-d303-455f-b732-0e144a5f98e8\") " pod="openshift-marketplace/community-operators-957th" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.913146 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c88444-d303-455f-b732-0e144a5f98e8-catalog-content\") pod \"community-operators-957th\" (UID: \"95c88444-d303-455f-b732-0e144a5f98e8\") " pod="openshift-marketplace/community-operators-957th" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.913173 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b77tb\" (UniqueName: \"kubernetes.io/projected/95c88444-d303-455f-b732-0e144a5f98e8-kube-api-access-b77tb\") pod \"community-operators-957th\" (UID: \"95c88444-d303-455f-b732-0e144a5f98e8\") " pod="openshift-marketplace/community-operators-957th" Jan 28 17:18:27 crc kubenswrapper[5001]: I0128 17:18:27.970668 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-btn8g" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.014643 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c88444-d303-455f-b732-0e144a5f98e8-utilities\") pod \"community-operators-957th\" (UID: \"95c88444-d303-455f-b732-0e144a5f98e8\") " pod="openshift-marketplace/community-operators-957th" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.014680 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c88444-d303-455f-b732-0e144a5f98e8-catalog-content\") pod \"community-operators-957th\" (UID: \"95c88444-d303-455f-b732-0e144a5f98e8\") " pod="openshift-marketplace/community-operators-957th" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.014710 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b77tb\" (UniqueName: \"kubernetes.io/projected/95c88444-d303-455f-b732-0e144a5f98e8-kube-api-access-b77tb\") pod \"community-operators-957th\" (UID: \"95c88444-d303-455f-b732-0e144a5f98e8\") " pod="openshift-marketplace/community-operators-957th" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.015310 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c88444-d303-455f-b732-0e144a5f98e8-utilities\") pod \"community-operators-957th\" (UID: \"95c88444-d303-455f-b732-0e144a5f98e8\") " pod="openshift-marketplace/community-operators-957th" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.017236 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c88444-d303-455f-b732-0e144a5f98e8-catalog-content\") pod \"community-operators-957th\" (UID: \"95c88444-d303-455f-b732-0e144a5f98e8\") " pod="openshift-marketplace/community-operators-957th" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.026374 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gsncd"] Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.033986 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b77tb\" (UniqueName: \"kubernetes.io/projected/95c88444-d303-455f-b732-0e144a5f98e8-kube-api-access-b77tb\") pod \"community-operators-957th\" (UID: \"95c88444-d303-455f-b732-0e144a5f98e8\") " pod="openshift-marketplace/community-operators-957th" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.100350 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.100987 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.103346 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.103700 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.112776 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.184855 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-957th" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.217852 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcac22c8-8c2b-4efe-a199-301ffc981095-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dcac22c8-8c2b-4efe-a199-301ffc981095\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.218002 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcac22c8-8c2b-4efe-a199-301ffc981095-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dcac22c8-8c2b-4efe-a199-301ffc981095\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.233614 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-btn8g"] Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.319194 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcac22c8-8c2b-4efe-a199-301ffc981095-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dcac22c8-8c2b-4efe-a199-301ffc981095\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.319650 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcac22c8-8c2b-4efe-a199-301ffc981095-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dcac22c8-8c2b-4efe-a199-301ffc981095\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.319276 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcac22c8-8c2b-4efe-a199-301ffc981095-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dcac22c8-8c2b-4efe-a199-301ffc981095\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.356763 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcac22c8-8c2b-4efe-a199-301ffc981095-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dcac22c8-8c2b-4efe-a199-301ffc981095\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.367044 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-gsncd" event={"ID":"68df3eed-9a6f-4127-ac82-a61ae7216062","Type":"ContainerStarted","Data":"79491c31565cee7aa3620bc2650420ca7b5676eca5ad7e8ed4865a9d714c74c1"} Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.367119 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsncd" event={"ID":"68df3eed-9a6f-4127-ac82-a61ae7216062","Type":"ContainerStarted","Data":"2eb82eec7d204d66c90ebf9db71d9001359711eb5886882fc334dd4b240ae7dc"} Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.372659 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btn8g" event={"ID":"0e4e8139-d262-4a83-aecc-f41c19d0c775","Type":"ContainerStarted","Data":"63b2c037b29f6d58a56e6c45f28559f595e953a34c239f9041407e116e456413"} Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.374452 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" event={"ID":"36cbdaab-10af-401c-8ec0-867a5e82dc3d","Type":"ContainerStarted","Data":"c044da6f9679c571c6ebed6ab4ef90f540290a41588e08fd0fa25fc30a8a7544"} Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.374480 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" event={"ID":"36cbdaab-10af-401c-8ec0-867a5e82dc3d","Type":"ContainerStarted","Data":"96df3d390b5ff2ac46519d8c61da211ae7d777ed41aea66429608a5ab59a68c7"} Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.374600 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.375501 5001 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.376574 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb" event={"ID":"6a92c043-2e58-4a72-9ecb-024736e0ff21","Type":"ContainerDied","Data":"dea048c8a924465074e56188b413eafea540eda3bbb86cf57c741c844b7e3799"} Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.376596 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dea048c8a924465074e56188b413eafea540eda3bbb86cf57c741c844b7e3799" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.376686 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.380647 5001 generic.go:334] "Generic (PLEG): container finished" podID="c648cc46-2f0e-4c7f-aaeb-a6abf4486e41" containerID="95b4e378147c98ebd50f79875305823eef4c18e1936d595ae1cced61e727cf69" exitCode=0 Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.380765 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnl7x" event={"ID":"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41","Type":"ContainerDied","Data":"95b4e378147c98ebd50f79875305823eef4c18e1936d595ae1cced61e727cf69"} Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.381021 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnl7x" event={"ID":"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41","Type":"ContainerStarted","Data":"56c2a4fe4a2429a55142dae7fe5800d7f8df1058a0c85da418001865f9c98127"} Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.421848 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" podStartSLOduration=128.421824857 podStartE2EDuration="2m8.421824857s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:28.420894633 +0000 UTC m=+154.588682863" watchObservedRunningTime="2026-01-28 17:18:28.421824857 +0000 UTC m=+154.589613087" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.433701 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.483796 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-957th"] Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.602317 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.632136 5001 patch_prober.go:28] interesting pod/router-default-5444994796-dww9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 17:18:28 crc kubenswrapper[5001]: [-]has-synced failed: reason withheld Jan 28 17:18:28 crc kubenswrapper[5001]: [+]process-running ok Jan 28 17:18:28 crc kubenswrapper[5001]: healthz check failed Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.632216 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dww9c" podUID="d46a8dca-7cc0-48b4-8626-6a516d1f502e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.741380 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 17:18:28 crc kubenswrapper[5001]: W0128 17:18:28.748995 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poddcac22c8_8c2b_4efe_a199_301ffc981095.slice/crio-c5e413783ff0d1aa8adb89c008ee3cd97946a6b82b3a98d2db275f19d0b1772b WatchSource:0}: Error finding container 
c5e413783ff0d1aa8adb89c008ee3cd97946a6b82b3a98d2db275f19d0b1772b: Status 404 returned error can't find the container with id c5e413783ff0d1aa8adb89c008ee3cd97946a6b82b3a98d2db275f19d0b1772b Jan 28 17:18:28 crc kubenswrapper[5001]: I0128 17:18:28.924151 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.237005 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bj9wh"] Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.238047 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bj9wh" Jan 28 17:18:29 crc kubenswrapper[5001]: W0128 17:18:29.240419 5001 reflector.go:561] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb": failed to list *v1.Secret: secrets "redhat-marketplace-dockercfg-x2ctb" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 28 17:18:29 crc kubenswrapper[5001]: E0128 17:18:29.240463 5001 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-x2ctb\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"redhat-marketplace-dockercfg-x2ctb\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.252121 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bj9wh"] Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.335867 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24b568f4-71c2-4cae-932f-b6f1a2daf7a5-utilities\") pod \"redhat-marketplace-bj9wh\" (UID: \"24b568f4-71c2-4cae-932f-b6f1a2daf7a5\") " pod="openshift-marketplace/redhat-marketplace-bj9wh" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.335944 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b568f4-71c2-4cae-932f-b6f1a2daf7a5-catalog-content\") pod \"redhat-marketplace-bj9wh\" (UID: \"24b568f4-71c2-4cae-932f-b6f1a2daf7a5\") " pod="openshift-marketplace/redhat-marketplace-bj9wh" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.336010 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdqvh\" (UniqueName: \"kubernetes.io/projected/24b568f4-71c2-4cae-932f-b6f1a2daf7a5-kube-api-access-kdqvh\") pod \"redhat-marketplace-bj9wh\" (UID: \"24b568f4-71c2-4cae-932f-b6f1a2daf7a5\") " pod="openshift-marketplace/redhat-marketplace-bj9wh" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.388041 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"dcac22c8-8c2b-4efe-a199-301ffc981095","Type":"ContainerStarted","Data":"c5e413783ff0d1aa8adb89c008ee3cd97946a6b82b3a98d2db275f19d0b1772b"} Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.389634 5001 generic.go:334] "Generic (PLEG): container finished" 
podID="95c88444-d303-455f-b732-0e144a5f98e8" containerID="c749ae26ae24929e25fef5722e99ab8e5821cf038c0374e20bc22287c37bdf5c" exitCode=0 Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.389684 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-957th" event={"ID":"95c88444-d303-455f-b732-0e144a5f98e8","Type":"ContainerDied","Data":"c749ae26ae24929e25fef5722e99ab8e5821cf038c0374e20bc22287c37bdf5c"} Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.389707 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-957th" event={"ID":"95c88444-d303-455f-b732-0e144a5f98e8","Type":"ContainerStarted","Data":"5349c02bc785e7d9eff9ad4bdb0e04704e37bcb0f6eff67f61061f762a2dc824"} Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.393017 5001 generic.go:334] "Generic (PLEG): container finished" podID="68df3eed-9a6f-4127-ac82-a61ae7216062" containerID="79491c31565cee7aa3620bc2650420ca7b5676eca5ad7e8ed4865a9d714c74c1" exitCode=0 Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.393131 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsncd" event={"ID":"68df3eed-9a6f-4127-ac82-a61ae7216062","Type":"ContainerDied","Data":"79491c31565cee7aa3620bc2650420ca7b5676eca5ad7e8ed4865a9d714c74c1"} Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.396307 5001 generic.go:334] "Generic (PLEG): container finished" podID="0e4e8139-d262-4a83-aecc-f41c19d0c775" containerID="806d888cd1183837d04ad14e1d58055abdddfa8dfcb3ad387185fce838cf1345" exitCode=0 Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.396392 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btn8g" event={"ID":"0e4e8139-d262-4a83-aecc-f41c19d0c775","Type":"ContainerDied","Data":"806d888cd1183837d04ad14e1d58055abdddfa8dfcb3ad387185fce838cf1345"} Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.438295 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24b568f4-71c2-4cae-932f-b6f1a2daf7a5-utilities\") pod \"redhat-marketplace-bj9wh\" (UID: \"24b568f4-71c2-4cae-932f-b6f1a2daf7a5\") " pod="openshift-marketplace/redhat-marketplace-bj9wh" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.438406 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b568f4-71c2-4cae-932f-b6f1a2daf7a5-catalog-content\") pod \"redhat-marketplace-bj9wh\" (UID: \"24b568f4-71c2-4cae-932f-b6f1a2daf7a5\") " pod="openshift-marketplace/redhat-marketplace-bj9wh" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.438452 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdqvh\" (UniqueName: \"kubernetes.io/projected/24b568f4-71c2-4cae-932f-b6f1a2daf7a5-kube-api-access-kdqvh\") pod \"redhat-marketplace-bj9wh\" (UID: \"24b568f4-71c2-4cae-932f-b6f1a2daf7a5\") " pod="openshift-marketplace/redhat-marketplace-bj9wh" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.440718 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24b568f4-71c2-4cae-932f-b6f1a2daf7a5-utilities\") pod \"redhat-marketplace-bj9wh\" (UID: \"24b568f4-71c2-4cae-932f-b6f1a2daf7a5\") " pod="openshift-marketplace/redhat-marketplace-bj9wh" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 
17:18:29.442221 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b568f4-71c2-4cae-932f-b6f1a2daf7a5-catalog-content\") pod \"redhat-marketplace-bj9wh\" (UID: \"24b568f4-71c2-4cae-932f-b6f1a2daf7a5\") " pod="openshift-marketplace/redhat-marketplace-bj9wh" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.488001 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdqvh\" (UniqueName: \"kubernetes.io/projected/24b568f4-71c2-4cae-932f-b6f1a2daf7a5-kube-api-access-kdqvh\") pod \"redhat-marketplace-bj9wh\" (UID: \"24b568f4-71c2-4cae-932f-b6f1a2daf7a5\") " pod="openshift-marketplace/redhat-marketplace-bj9wh" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.594340 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.597625 5001 patch_prober.go:28] interesting pod/router-default-5444994796-dww9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 17:18:29 crc kubenswrapper[5001]: [+]has-synced ok Jan 28 17:18:29 crc kubenswrapper[5001]: [+]process-running ok Jan 28 17:18:29 crc kubenswrapper[5001]: healthz check failed Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.597677 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dww9c" podUID="d46a8dca-7cc0-48b4-8626-6a516d1f502e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.607483 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xxzgv" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.635145 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nwlzb"] Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.636405 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwlzb" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.646028 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwlzb"] Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.741194 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk7w8\" (UniqueName: \"kubernetes.io/projected/a7567e81-456f-4076-9d78-84e85d057dd4-kube-api-access-gk7w8\") pod \"redhat-marketplace-nwlzb\" (UID: \"a7567e81-456f-4076-9d78-84e85d057dd4\") " pod="openshift-marketplace/redhat-marketplace-nwlzb" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.741296 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7567e81-456f-4076-9d78-84e85d057dd4-catalog-content\") pod \"redhat-marketplace-nwlzb\" (UID: \"a7567e81-456f-4076-9d78-84e85d057dd4\") " pod="openshift-marketplace/redhat-marketplace-nwlzb" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.741367 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7567e81-456f-4076-9d78-84e85d057dd4-utilities\") pod \"redhat-marketplace-nwlzb\" (UID: \"a7567e81-456f-4076-9d78-84e85d057dd4\") " pod="openshift-marketplace/redhat-marketplace-nwlzb" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.844458 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7567e81-456f-4076-9d78-84e85d057dd4-catalog-content\") pod \"redhat-marketplace-nwlzb\" (UID: \"a7567e81-456f-4076-9d78-84e85d057dd4\") " pod="openshift-marketplace/redhat-marketplace-nwlzb" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.844750 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7567e81-456f-4076-9d78-84e85d057dd4-utilities\") pod \"redhat-marketplace-nwlzb\" (UID: \"a7567e81-456f-4076-9d78-84e85d057dd4\") " pod="openshift-marketplace/redhat-marketplace-nwlzb" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.844945 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk7w8\" (UniqueName: \"kubernetes.io/projected/a7567e81-456f-4076-9d78-84e85d057dd4-kube-api-access-gk7w8\") pod \"redhat-marketplace-nwlzb\" (UID: \"a7567e81-456f-4076-9d78-84e85d057dd4\") " pod="openshift-marketplace/redhat-marketplace-nwlzb" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.845113 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7567e81-456f-4076-9d78-84e85d057dd4-catalog-content\") pod \"redhat-marketplace-nwlzb\" (UID: \"a7567e81-456f-4076-9d78-84e85d057dd4\") " pod="openshift-marketplace/redhat-marketplace-nwlzb" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.845522 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7567e81-456f-4076-9d78-84e85d057dd4-utilities\") pod \"redhat-marketplace-nwlzb\" (UID: \"a7567e81-456f-4076-9d78-84e85d057dd4\") " pod="openshift-marketplace/redhat-marketplace-nwlzb" Jan 28 17:18:29 crc kubenswrapper[5001]: I0128 17:18:29.873684 5001 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gk7w8\" (UniqueName: \"kubernetes.io/projected/a7567e81-456f-4076-9d78-84e85d057dd4-kube-api-access-gk7w8\") pod \"redhat-marketplace-nwlzb\" (UID: \"a7567e81-456f-4076-9d78-84e85d057dd4\") " pod="openshift-marketplace/redhat-marketplace-nwlzb" Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.288004 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.291958 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bj9wh" Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.295411 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwlzb" Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.403045 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"dcac22c8-8c2b-4efe-a199-301ffc981095","Type":"ContainerStarted","Data":"97140774afbaaed583aa80ee4930cb06b24d96df38694cbada6584ea968725e2"} Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.583036 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bj9wh"] Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.599091 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.607146 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-dww9c" Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.656702 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2mp88"] Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.657882 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2mp88" Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.664902 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.682212 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2mp88"] Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.731209 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.759022 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j942c\" (UniqueName: \"kubernetes.io/projected/c43a921e-0efa-4e2c-b425-21f7cd87a24b-kube-api-access-j942c\") pod \"redhat-operators-2mp88\" (UID: \"c43a921e-0efa-4e2c-b425-21f7cd87a24b\") " pod="openshift-marketplace/redhat-operators-2mp88" Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.759129 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c43a921e-0efa-4e2c-b425-21f7cd87a24b-utilities\") pod \"redhat-operators-2mp88\" (UID: \"c43a921e-0efa-4e2c-b425-21f7cd87a24b\") " pod="openshift-marketplace/redhat-operators-2mp88" Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.759156 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c43a921e-0efa-4e2c-b425-21f7cd87a24b-catalog-content\") pod \"redhat-operators-2mp88\" (UID: \"c43a921e-0efa-4e2c-b425-21f7cd87a24b\") " pod="openshift-marketplace/redhat-operators-2mp88" Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.860244 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c43a921e-0efa-4e2c-b425-21f7cd87a24b-utilities\") pod \"redhat-operators-2mp88\" (UID: \"c43a921e-0efa-4e2c-b425-21f7cd87a24b\") " pod="openshift-marketplace/redhat-operators-2mp88" Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.860296 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c43a921e-0efa-4e2c-b425-21f7cd87a24b-catalog-content\") pod \"redhat-operators-2mp88\" (UID: \"c43a921e-0efa-4e2c-b425-21f7cd87a24b\") " pod="openshift-marketplace/redhat-operators-2mp88" Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.860336 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j942c\" (UniqueName: \"kubernetes.io/projected/c43a921e-0efa-4e2c-b425-21f7cd87a24b-kube-api-access-j942c\") pod \"redhat-operators-2mp88\" (UID: \"c43a921e-0efa-4e2c-b425-21f7cd87a24b\") " pod="openshift-marketplace/redhat-operators-2mp88" Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.860968 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c43a921e-0efa-4e2c-b425-21f7cd87a24b-utilities\") pod \"redhat-operators-2mp88\" (UID: \"c43a921e-0efa-4e2c-b425-21f7cd87a24b\") " pod="openshift-marketplace/redhat-operators-2mp88" Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.861218 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/c43a921e-0efa-4e2c-b425-21f7cd87a24b-catalog-content\") pod \"redhat-operators-2mp88\" (UID: \"c43a921e-0efa-4e2c-b425-21f7cd87a24b\") " pod="openshift-marketplace/redhat-operators-2mp88" Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.871950 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwlzb"] Jan 28 17:18:30 crc kubenswrapper[5001]: W0128 17:18:30.888759 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7567e81_456f_4076_9d78_84e85d057dd4.slice/crio-8a404721762399366ea1dbab58c0e1e43a12a469877c21e4b1a71ea16869486f WatchSource:0}: Error finding container 8a404721762399366ea1dbab58c0e1e43a12a469877c21e4b1a71ea16869486f: Status 404 returned error can't find the container with id 8a404721762399366ea1dbab58c0e1e43a12a469877c21e4b1a71ea16869486f Jan 28 17:18:30 crc kubenswrapper[5001]: I0128 17:18:30.889609 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j942c\" (UniqueName: \"kubernetes.io/projected/c43a921e-0efa-4e2c-b425-21f7cd87a24b-kube-api-access-j942c\") pod \"redhat-operators-2mp88\" (UID: \"c43a921e-0efa-4e2c-b425-21f7cd87a24b\") " pod="openshift-marketplace/redhat-operators-2mp88" Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.033919 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-97z2w"] Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.034999 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-97z2w" Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.072134 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2mp88" Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.075596 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-97z2w"] Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.173991 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04b4625e-0b3f-44a9-b1a9-5855e74eef29-utilities\") pod \"redhat-operators-97z2w\" (UID: \"04b4625e-0b3f-44a9-b1a9-5855e74eef29\") " pod="openshift-marketplace/redhat-operators-97z2w" Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.174071 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04b4625e-0b3f-44a9-b1a9-5855e74eef29-catalog-content\") pod \"redhat-operators-97z2w\" (UID: \"04b4625e-0b3f-44a9-b1a9-5855e74eef29\") " pod="openshift-marketplace/redhat-operators-97z2w" Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.174376 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqqq2\" (UniqueName: \"kubernetes.io/projected/04b4625e-0b3f-44a9-b1a9-5855e74eef29-kube-api-access-hqqq2\") pod \"redhat-operators-97z2w\" (UID: \"04b4625e-0b3f-44a9-b1a9-5855e74eef29\") " pod="openshift-marketplace/redhat-operators-97z2w" Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.283299 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04b4625e-0b3f-44a9-b1a9-5855e74eef29-utilities\") pod \"redhat-operators-97z2w\" (UID: \"04b4625e-0b3f-44a9-b1a9-5855e74eef29\") " pod="openshift-marketplace/redhat-operators-97z2w" Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.283725 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04b4625e-0b3f-44a9-b1a9-5855e74eef29-catalog-content\") pod \"redhat-operators-97z2w\" (UID: \"04b4625e-0b3f-44a9-b1a9-5855e74eef29\") " pod="openshift-marketplace/redhat-operators-97z2w" Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.283773 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqqq2\" (UniqueName: \"kubernetes.io/projected/04b4625e-0b3f-44a9-b1a9-5855e74eef29-kube-api-access-hqqq2\") pod \"redhat-operators-97z2w\" (UID: \"04b4625e-0b3f-44a9-b1a9-5855e74eef29\") " pod="openshift-marketplace/redhat-operators-97z2w" Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.284304 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04b4625e-0b3f-44a9-b1a9-5855e74eef29-utilities\") pod \"redhat-operators-97z2w\" (UID: \"04b4625e-0b3f-44a9-b1a9-5855e74eef29\") " pod="openshift-marketplace/redhat-operators-97z2w" Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.284566 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04b4625e-0b3f-44a9-b1a9-5855e74eef29-catalog-content\") pod \"redhat-operators-97z2w\" (UID: \"04b4625e-0b3f-44a9-b1a9-5855e74eef29\") " pod="openshift-marketplace/redhat-operators-97z2w" Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.318409 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hqqq2\" (UniqueName: \"kubernetes.io/projected/04b4625e-0b3f-44a9-b1a9-5855e74eef29-kube-api-access-hqqq2\") pod \"redhat-operators-97z2w\" (UID: \"04b4625e-0b3f-44a9-b1a9-5855e74eef29\") " pod="openshift-marketplace/redhat-operators-97z2w" Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.356219 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2mp88"] Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.385700 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-97z2w" Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.412271 5001 generic.go:334] "Generic (PLEG): container finished" podID="dcac22c8-8c2b-4efe-a199-301ffc981095" containerID="97140774afbaaed583aa80ee4930cb06b24d96df38694cbada6584ea968725e2" exitCode=0 Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.412370 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"dcac22c8-8c2b-4efe-a199-301ffc981095","Type":"ContainerDied","Data":"97140774afbaaed583aa80ee4930cb06b24d96df38694cbada6584ea968725e2"} Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.414951 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mp88" event={"ID":"c43a921e-0efa-4e2c-b425-21f7cd87a24b","Type":"ContainerStarted","Data":"b8bc4cf6157f57bc472a9a40eb83c0dd0eddd8637324ee1b377d4daadb163ed2"} Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.416523 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwlzb" event={"ID":"a7567e81-456f-4076-9d78-84e85d057dd4","Type":"ContainerStarted","Data":"8a404721762399366ea1dbab58c0e1e43a12a469877c21e4b1a71ea16869486f"} Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.418398 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj9wh" event={"ID":"24b568f4-71c2-4cae-932f-b6f1a2daf7a5","Type":"ContainerStarted","Data":"755ee0e3b6b2c2dae42728f12e425fe78519d06cf17071c80f41fc6a06c093aa"} Jan 28 17:18:31 crc kubenswrapper[5001]: I0128 17:18:31.654507 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-97z2w"] Jan 28 17:18:31 crc kubenswrapper[5001]: W0128 17:18:31.665408 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04b4625e_0b3f_44a9_b1a9_5855e74eef29.slice/crio-33f3ecbbfb28458768f8f566cba0e4dbc462246462441026e402e55ead16021f WatchSource:0}: Error finding container 33f3ecbbfb28458768f8f566cba0e4dbc462246462441026e402e55ead16021f: Status 404 returned error can't find the container with id 33f3ecbbfb28458768f8f566cba0e4dbc462246462441026e402e55ead16021f Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.136129 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.137601 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.140841 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.142510 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.144792 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.300049 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f39a3d9-b130-4a8a-a0ef-946d65a3c730-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0f39a3d9-b130-4a8a-a0ef-946d65a3c730\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.300141 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f39a3d9-b130-4a8a-a0ef-946d65a3c730-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0f39a3d9-b130-4a8a-a0ef-946d65a3c730\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.401757 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f39a3d9-b130-4a8a-a0ef-946d65a3c730-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0f39a3d9-b130-4a8a-a0ef-946d65a3c730\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.401870 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f39a3d9-b130-4a8a-a0ef-946d65a3c730-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0f39a3d9-b130-4a8a-a0ef-946d65a3c730\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.401956 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f39a3d9-b130-4a8a-a0ef-946d65a3c730-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0f39a3d9-b130-4a8a-a0ef-946d65a3c730\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.438819 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f39a3d9-b130-4a8a-a0ef-946d65a3c730-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0f39a3d9-b130-4a8a-a0ef-946d65a3c730\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.452422 5001 generic.go:334] "Generic (PLEG): container finished" podID="c43a921e-0efa-4e2c-b425-21f7cd87a24b" containerID="745ce5be4672d3adbde0786eab39d20b627ee4a25edf918853d83f74304b2360" exitCode=0 Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.452505 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mp88" event={"ID":"c43a921e-0efa-4e2c-b425-21f7cd87a24b","Type":"ContainerDied","Data":"745ce5be4672d3adbde0786eab39d20b627ee4a25edf918853d83f74304b2360"} Jan 28 
17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.453830 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.459883 5001 generic.go:334] "Generic (PLEG): container finished" podID="a7567e81-456f-4076-9d78-84e85d057dd4" containerID="c01d0de7b602173587b58766e2933395e961075eac167facf20104b1384cac25" exitCode=0 Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.459943 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwlzb" event={"ID":"a7567e81-456f-4076-9d78-84e85d057dd4","Type":"ContainerDied","Data":"c01d0de7b602173587b58766e2933395e961075eac167facf20104b1384cac25"} Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.464290 5001 generic.go:334] "Generic (PLEG): container finished" podID="24b568f4-71c2-4cae-932f-b6f1a2daf7a5" containerID="a7cdfb80bf9d2bda1ce357068bcb82c6faa3fe3b6450361998102f28ab217842" exitCode=0 Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.464350 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj9wh" event={"ID":"24b568f4-71c2-4cae-932f-b6f1a2daf7a5","Type":"ContainerDied","Data":"a7cdfb80bf9d2bda1ce357068bcb82c6faa3fe3b6450361998102f28ab217842"} Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.466625 5001 generic.go:334] "Generic (PLEG): container finished" podID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" containerID="a6930246aa953025bf800fa9d6f4d92238388b004e8fb84d5dc49234b045cb05" exitCode=0 Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.467124 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97z2w" event={"ID":"04b4625e-0b3f-44a9-b1a9-5855e74eef29","Type":"ContainerDied","Data":"a6930246aa953025bf800fa9d6f4d92238388b004e8fb84d5dc49234b045cb05"} Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.467199 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97z2w" event={"ID":"04b4625e-0b3f-44a9-b1a9-5855e74eef29","Type":"ContainerStarted","Data":"33f3ecbbfb28458768f8f566cba0e4dbc462246462441026e402e55ead16021f"} Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.807669 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.809053 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcac22c8-8c2b-4efe-a199-301ffc981095-kube-api-access\") pod \"dcac22c8-8c2b-4efe-a199-301ffc981095\" (UID: \"dcac22c8-8c2b-4efe-a199-301ffc981095\") " Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.809127 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcac22c8-8c2b-4efe-a199-301ffc981095-kubelet-dir\") pod \"dcac22c8-8c2b-4efe-a199-301ffc981095\" (UID: \"dcac22c8-8c2b-4efe-a199-301ffc981095\") " Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.809244 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcac22c8-8c2b-4efe-a199-301ffc981095-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dcac22c8-8c2b-4efe-a199-301ffc981095" (UID: "dcac22c8-8c2b-4efe-a199-301ffc981095"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.813151 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcac22c8-8c2b-4efe-a199-301ffc981095-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dcac22c8-8c2b-4efe-a199-301ffc981095" (UID: "dcac22c8-8c2b-4efe-a199-301ffc981095"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.912630 5001 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcac22c8-8c2b-4efe-a199-301ffc981095-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 17:18:32 crc kubenswrapper[5001]: I0128 17:18:32.912667 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcac22c8-8c2b-4efe-a199-301ffc981095-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 17:18:33 crc kubenswrapper[5001]: I0128 17:18:33.037634 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 17:18:33 crc kubenswrapper[5001]: I0128 17:18:33.477762 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 17:18:33 crc kubenswrapper[5001]: I0128 17:18:33.477770 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"dcac22c8-8c2b-4efe-a199-301ffc981095","Type":"ContainerDied","Data":"c5e413783ff0d1aa8adb89c008ee3cd97946a6b82b3a98d2db275f19d0b1772b"} Jan 28 17:18:33 crc kubenswrapper[5001]: I0128 17:18:33.478362 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5e413783ff0d1aa8adb89c008ee3cd97946a6b82b3a98d2db275f19d0b1772b" Jan 28 17:18:33 crc kubenswrapper[5001]: I0128 17:18:33.480174 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0f39a3d9-b130-4a8a-a0ef-946d65a3c730","Type":"ContainerStarted","Data":"5aec343c1f79f526a9e0530689ff04be77fc5a9489649c22f6a20fcf86b8aea9"} Jan 28 17:18:34 crc kubenswrapper[5001]: I0128 17:18:34.531612 5001 generic.go:334] "Generic (PLEG): container finished" podID="0f39a3d9-b130-4a8a-a0ef-946d65a3c730" containerID="302520b95ae50de749e8be57dfc82de87fc7f882ad3e5c6678357cd46efa75a5" exitCode=0 Jan 28 17:18:34 crc kubenswrapper[5001]: I0128 17:18:34.531926 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0f39a3d9-b130-4a8a-a0ef-946d65a3c730","Type":"ContainerDied","Data":"302520b95ae50de749e8be57dfc82de87fc7f882ad3e5c6678357cd46efa75a5"} Jan 28 17:18:34 crc kubenswrapper[5001]: I0128 17:18:34.669291 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-w84qz" Jan 28 17:18:34 crc kubenswrapper[5001]: I0128 17:18:34.837684 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:18:34 crc kubenswrapper[5001]: I0128 17:18:34.837752 5001 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:18:35 crc kubenswrapper[5001]: I0128 17:18:35.818497 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 17:18:35 crc kubenswrapper[5001]: I0128 17:18:35.859693 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f39a3d9-b130-4a8a-a0ef-946d65a3c730-kubelet-dir\") pod \"0f39a3d9-b130-4a8a-a0ef-946d65a3c730\" (UID: \"0f39a3d9-b130-4a8a-a0ef-946d65a3c730\") " Jan 28 17:18:35 crc kubenswrapper[5001]: I0128 17:18:35.859835 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f39a3d9-b130-4a8a-a0ef-946d65a3c730-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0f39a3d9-b130-4a8a-a0ef-946d65a3c730" (UID: "0f39a3d9-b130-4a8a-a0ef-946d65a3c730"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:18:35 crc kubenswrapper[5001]: I0128 17:18:35.859874 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f39a3d9-b130-4a8a-a0ef-946d65a3c730-kube-api-access\") pod \"0f39a3d9-b130-4a8a-a0ef-946d65a3c730\" (UID: \"0f39a3d9-b130-4a8a-a0ef-946d65a3c730\") " Jan 28 17:18:35 crc kubenswrapper[5001]: I0128 17:18:35.860577 5001 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f39a3d9-b130-4a8a-a0ef-946d65a3c730-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 17:18:35 crc kubenswrapper[5001]: I0128 17:18:35.865225 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f39a3d9-b130-4a8a-a0ef-946d65a3c730-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0f39a3d9-b130-4a8a-a0ef-946d65a3c730" (UID: "0f39a3d9-b130-4a8a-a0ef-946d65a3c730"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:18:35 crc kubenswrapper[5001]: I0128 17:18:35.962112 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f39a3d9-b130-4a8a-a0ef-946d65a3c730-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 17:18:36 crc kubenswrapper[5001]: I0128 17:18:36.582187 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0f39a3d9-b130-4a8a-a0ef-946d65a3c730","Type":"ContainerDied","Data":"5aec343c1f79f526a9e0530689ff04be77fc5a9489649c22f6a20fcf86b8aea9"} Jan 28 17:18:36 crc kubenswrapper[5001]: I0128 17:18:36.582232 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aec343c1f79f526a9e0530689ff04be77fc5a9489649c22f6a20fcf86b8aea9" Jan 28 17:18:36 crc kubenswrapper[5001]: I0128 17:18:36.582242 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 17:18:36 crc kubenswrapper[5001]: I0128 17:18:36.752512 5001 patch_prober.go:28] interesting pod/downloads-7954f5f757-9zxnt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 28 17:18:36 crc kubenswrapper[5001]: I0128 17:18:36.752564 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9zxnt" podUID="881bc101-23c7-42c2-b4b9-b9983d9d4b1c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 28 17:18:36 crc kubenswrapper[5001]: I0128 17:18:36.752964 5001 patch_prober.go:28] interesting pod/downloads-7954f5f757-9zxnt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 28 17:18:36 crc kubenswrapper[5001]: I0128 17:18:36.753054 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9zxnt" podUID="881bc101-23c7-42c2-b4b9-b9983d9d4b1c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 28 17:18:36 crc kubenswrapper[5001]: I0128 17:18:36.929697 5001 patch_prober.go:28] interesting pod/console-f9d7485db-4xbj9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 28 17:18:36 crc kubenswrapper[5001]: I0128 17:18:36.929761 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4xbj9" podUID="a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 28 17:18:42 crc kubenswrapper[5001]: I0128 17:18:42.559715 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs\") pod \"network-metrics-daemon-rnn76\" (UID: \"2b5caa8d-b144-45a6-b334-e9e77c13064d\") " pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:18:42 crc kubenswrapper[5001]: I0128 17:18:42.565761 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b5caa8d-b144-45a6-b334-e9e77c13064d-metrics-certs\") pod \"network-metrics-daemon-rnn76\" (UID: \"2b5caa8d-b144-45a6-b334-e9e77c13064d\") " pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:18:42 crc kubenswrapper[5001]: I0128 17:18:42.685559 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rnn76" Jan 28 17:18:42 crc kubenswrapper[5001]: I0128 17:18:42.742738 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tv6fl"] Jan 28 17:18:42 crc kubenswrapper[5001]: I0128 17:18:42.743431 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" podUID="44f484a0-a976-4e56-82d9-84d8953664db" containerName="controller-manager" containerID="cri-o://1853c48e1eaea7acc93da29ac5b2ac4d9eaf7d705635c81af9e03c2b1c1e24d9" gracePeriod=30 Jan 28 17:18:42 crc kubenswrapper[5001]: I0128 17:18:42.761577 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v"] Jan 28 17:18:42 crc kubenswrapper[5001]: I0128 17:18:42.761785 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" podUID="5c087dd8-02f6-4d1b-bb5f-3a624b0210b3" containerName="route-controller-manager" containerID="cri-o://ce9015950af2499718d12c440724dc51f06b96becc67eeb052377cd9ea3a8e24" gracePeriod=30 Jan 28 17:18:44 crc kubenswrapper[5001]: I0128 17:18:44.681647 5001 generic.go:334] "Generic (PLEG): container finished" podID="44f484a0-a976-4e56-82d9-84d8953664db" containerID="1853c48e1eaea7acc93da29ac5b2ac4d9eaf7d705635c81af9e03c2b1c1e24d9" exitCode=0 Jan 28 17:18:44 crc kubenswrapper[5001]: I0128 17:18:44.681716 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" event={"ID":"44f484a0-a976-4e56-82d9-84d8953664db","Type":"ContainerDied","Data":"1853c48e1eaea7acc93da29ac5b2ac4d9eaf7d705635c81af9e03c2b1c1e24d9"} Jan 28 17:18:44 crc kubenswrapper[5001]: I0128 17:18:44.683164 5001 generic.go:334] "Generic (PLEG): container finished" podID="5c087dd8-02f6-4d1b-bb5f-3a624b0210b3" containerID="ce9015950af2499718d12c440724dc51f06b96becc67eeb052377cd9ea3a8e24" exitCode=0 Jan 28 17:18:44 crc kubenswrapper[5001]: I0128 17:18:44.683208 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" event={"ID":"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3","Type":"ContainerDied","Data":"ce9015950af2499718d12c440724dc51f06b96becc67eeb052377cd9ea3a8e24"} Jan 28 17:18:46 crc kubenswrapper[5001]: I0128 17:18:46.758327 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-9zxnt" Jan 28 17:18:46 crc kubenswrapper[5001]: I0128 17:18:46.933336 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:46 crc kubenswrapper[5001]: I0128 17:18:46.936633 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:18:46 crc kubenswrapper[5001]: I0128 17:18:46.948885 5001 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tv6fl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 28 17:18:46 crc kubenswrapper[5001]: I0128 17:18:46.948936 5001 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" podUID="44f484a0-a976-4e56-82d9-84d8953664db" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 28 17:18:47 crc kubenswrapper[5001]: I0128 17:18:47.290327 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:18:47 crc kubenswrapper[5001]: I0128 17:18:47.747525 5001 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-8kk4v container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 28 17:18:47 crc kubenswrapper[5001]: I0128 17:18:47.747660 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" podUID="5c087dd8-02f6-4d1b-bb5f-3a624b0210b3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 28 17:18:56 crc kubenswrapper[5001]: I0128 17:18:56.949663 5001 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tv6fl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 28 17:18:56 crc kubenswrapper[5001]: I0128 17:18:56.950264 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" podUID="44f484a0-a976-4e56-82d9-84d8953664db" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 28 17:18:57 crc kubenswrapper[5001]: I0128 17:18:57.747842 5001 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-8kk4v container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 28 17:18:57 crc kubenswrapper[5001]: I0128 17:18:57.747896 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" podUID="5c087dd8-02f6-4d1b-bb5f-3a624b0210b3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 28 17:18:58 crc kubenswrapper[5001]: I0128 17:18:58.918248 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jzg7q" Jan 28 17:19:01 crc kubenswrapper[5001]: I0128 17:19:01.637368 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 17:19:04 crc kubenswrapper[5001]: I0128 17:19:04.834548 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:19:04 crc kubenswrapper[5001]: I0128 17:19:04.834813 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:19:07 crc kubenswrapper[5001]: I0128 17:19:07.538579 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 17:19:07 crc kubenswrapper[5001]: E0128 17:19:07.539288 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcac22c8-8c2b-4efe-a199-301ffc981095" containerName="pruner" Jan 28 17:19:07 crc kubenswrapper[5001]: I0128 17:19:07.539306 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcac22c8-8c2b-4efe-a199-301ffc981095" containerName="pruner" Jan 28 17:19:07 crc kubenswrapper[5001]: E0128 17:19:07.539324 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f39a3d9-b130-4a8a-a0ef-946d65a3c730" containerName="pruner" Jan 28 17:19:07 crc kubenswrapper[5001]: I0128 17:19:07.539333 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f39a3d9-b130-4a8a-a0ef-946d65a3c730" containerName="pruner" Jan 28 17:19:07 crc kubenswrapper[5001]: I0128 17:19:07.539602 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f39a3d9-b130-4a8a-a0ef-946d65a3c730" containerName="pruner" Jan 28 17:19:07 crc kubenswrapper[5001]: I0128 17:19:07.539640 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcac22c8-8c2b-4efe-a199-301ffc981095" containerName="pruner" Jan 28 17:19:07 crc kubenswrapper[5001]: I0128 17:19:07.540653 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 17:19:07 crc kubenswrapper[5001]: I0128 17:19:07.545481 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 17:19:07 crc kubenswrapper[5001]: I0128 17:19:07.546740 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 17:19:07 crc kubenswrapper[5001]: I0128 17:19:07.548797 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 17:19:07 crc kubenswrapper[5001]: I0128 17:19:07.658555 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38583aed-ca40-4a55-8f7e-4695e7117543-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"38583aed-ca40-4a55-8f7e-4695e7117543\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 17:19:07 crc kubenswrapper[5001]: I0128 17:19:07.658799 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38583aed-ca40-4a55-8f7e-4695e7117543-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"38583aed-ca40-4a55-8f7e-4695e7117543\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 17:19:07 crc kubenswrapper[5001]: I0128 17:19:07.760421 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38583aed-ca40-4a55-8f7e-4695e7117543-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"38583aed-ca40-4a55-8f7e-4695e7117543\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 17:19:07 crc kubenswrapper[5001]: I0128 17:19:07.760494 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38583aed-ca40-4a55-8f7e-4695e7117543-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"38583aed-ca40-4a55-8f7e-4695e7117543\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 17:19:07 crc kubenswrapper[5001]: I0128 17:19:07.760569 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38583aed-ca40-4a55-8f7e-4695e7117543-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"38583aed-ca40-4a55-8f7e-4695e7117543\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 17:19:07 crc kubenswrapper[5001]: I0128 17:19:07.791368 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38583aed-ca40-4a55-8f7e-4695e7117543-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"38583aed-ca40-4a55-8f7e-4695e7117543\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 17:19:07 crc kubenswrapper[5001]: I0128 17:19:07.862754 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 17:19:07 crc kubenswrapper[5001]: I0128 17:19:07.949811 5001 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tv6fl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 17:19:07 crc kubenswrapper[5001]: I0128 17:19:07.949921 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" podUID="44f484a0-a976-4e56-82d9-84d8953664db" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 17:19:08 crc kubenswrapper[5001]: I0128 17:19:08.747678 5001 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-8kk4v container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 17:19:08 crc kubenswrapper[5001]: I0128 17:19:08.747831 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" podUID="5c087dd8-02f6-4d1b-bb5f-3a624b0210b3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 17:19:12 crc kubenswrapper[5001]: I0128 17:19:12.441552 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-mwmbl" podUID="575106e1-5f5b-4e85-973b-8102a88f91b5" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 17:19:12 crc kubenswrapper[5001]: I0128 17:19:12.929733 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 17:19:12 crc kubenswrapper[5001]: I0128 17:19:12.930381 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 17:19:12 crc kubenswrapper[5001]: I0128 17:19:12.938704 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 17:19:13 crc kubenswrapper[5001]: I0128 17:19:13.026645 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7075bc2e-15dc-4bbc-a5d7-f77c163576fa-var-lock\") pod \"installer-9-crc\" (UID: \"7075bc2e-15dc-4bbc-a5d7-f77c163576fa\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 17:19:13 crc kubenswrapper[5001]: I0128 17:19:13.026857 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7075bc2e-15dc-4bbc-a5d7-f77c163576fa-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7075bc2e-15dc-4bbc-a5d7-f77c163576fa\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 17:19:13 crc kubenswrapper[5001]: I0128 17:19:13.026919 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7075bc2e-15dc-4bbc-a5d7-f77c163576fa-kube-api-access\") pod \"installer-9-crc\" (UID: \"7075bc2e-15dc-4bbc-a5d7-f77c163576fa\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 17:19:13 crc kubenswrapper[5001]: I0128 17:19:13.129287 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7075bc2e-15dc-4bbc-a5d7-f77c163576fa-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7075bc2e-15dc-4bbc-a5d7-f77c163576fa\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 17:19:13 crc kubenswrapper[5001]: I0128 17:19:13.129345 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7075bc2e-15dc-4bbc-a5d7-f77c163576fa-kube-api-access\") pod \"installer-9-crc\" (UID: \"7075bc2e-15dc-4bbc-a5d7-f77c163576fa\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 17:19:13 crc kubenswrapper[5001]: I0128 17:19:13.129455 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7075bc2e-15dc-4bbc-a5d7-f77c163576fa-var-lock\") pod \"installer-9-crc\" (UID: \"7075bc2e-15dc-4bbc-a5d7-f77c163576fa\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 17:19:13 crc kubenswrapper[5001]: I0128 17:19:13.129442 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7075bc2e-15dc-4bbc-a5d7-f77c163576fa-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7075bc2e-15dc-4bbc-a5d7-f77c163576fa\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 17:19:13 crc kubenswrapper[5001]: I0128 17:19:13.129585 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7075bc2e-15dc-4bbc-a5d7-f77c163576fa-var-lock\") pod \"installer-9-crc\" (UID: \"7075bc2e-15dc-4bbc-a5d7-f77c163576fa\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 17:19:13 crc kubenswrapper[5001]: I0128 17:19:13.154217 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7075bc2e-15dc-4bbc-a5d7-f77c163576fa-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"7075bc2e-15dc-4bbc-a5d7-f77c163576fa\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 17:19:13 crc kubenswrapper[5001]: I0128 17:19:13.262512 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.461788 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.472878 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.501941 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4"] Jan 28 17:19:14 crc kubenswrapper[5001]: E0128 17:19:14.502473 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c087dd8-02f6-4d1b-bb5f-3a624b0210b3" containerName="route-controller-manager" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.502579 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c087dd8-02f6-4d1b-bb5f-3a624b0210b3" containerName="route-controller-manager" Jan 28 17:19:14 crc kubenswrapper[5001]: E0128 17:19:14.502671 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44f484a0-a976-4e56-82d9-84d8953664db" containerName="controller-manager" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.502751 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="44f484a0-a976-4e56-82d9-84d8953664db" containerName="controller-manager" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.502963 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c087dd8-02f6-4d1b-bb5f-3a624b0210b3" containerName="route-controller-manager" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.503081 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="44f484a0-a976-4e56-82d9-84d8953664db" containerName="controller-manager" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.503645 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.512960 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4"] Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.520370 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg5v5\" (UniqueName: \"kubernetes.io/projected/44f484a0-a976-4e56-82d9-84d8953664db-kube-api-access-xg5v5\") pod \"44f484a0-a976-4e56-82d9-84d8953664db\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.520414 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44f484a0-a976-4e56-82d9-84d8953664db-config\") pod \"44f484a0-a976-4e56-82d9-84d8953664db\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.520446 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zwg6\" (UniqueName: \"kubernetes.io/projected/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-kube-api-access-6zwg6\") pod \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\" (UID: \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\") " Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.520483 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-config\") pod \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\" (UID: \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\") " Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.520512 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44f484a0-a976-4e56-82d9-84d8953664db-client-ca\") pod \"44f484a0-a976-4e56-82d9-84d8953664db\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.520537 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44f484a0-a976-4e56-82d9-84d8953664db-serving-cert\") pod \"44f484a0-a976-4e56-82d9-84d8953664db\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.520564 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44f484a0-a976-4e56-82d9-84d8953664db-proxy-ca-bundles\") pod \"44f484a0-a976-4e56-82d9-84d8953664db\" (UID: \"44f484a0-a976-4e56-82d9-84d8953664db\") " Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.520617 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-serving-cert\") pod \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\" (UID: \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\") " Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.520648 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-client-ca\") pod \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\" (UID: \"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3\") " Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.520763 5001 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-client-ca\") pod \"route-controller-manager-5848c96655-jvmj4\" (UID: \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\") " pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.520816 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-serving-cert\") pod \"route-controller-manager-5848c96655-jvmj4\" (UID: \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\") " pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.520845 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqlvh\" (UniqueName: \"kubernetes.io/projected/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-kube-api-access-wqlvh\") pod \"route-controller-manager-5848c96655-jvmj4\" (UID: \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\") " pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.520869 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-config\") pod \"route-controller-manager-5848c96655-jvmj4\" (UID: \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\") " pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.527352 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44f484a0-a976-4e56-82d9-84d8953664db-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "44f484a0-a976-4e56-82d9-84d8953664db" (UID: "44f484a0-a976-4e56-82d9-84d8953664db"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.528155 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44f484a0-a976-4e56-82d9-84d8953664db-kube-api-access-xg5v5" (OuterVolumeSpecName: "kube-api-access-xg5v5") pod "44f484a0-a976-4e56-82d9-84d8953664db" (UID: "44f484a0-a976-4e56-82d9-84d8953664db"). InnerVolumeSpecName "kube-api-access-xg5v5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.528737 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44f484a0-a976-4e56-82d9-84d8953664db-client-ca" (OuterVolumeSpecName: "client-ca") pod "44f484a0-a976-4e56-82d9-84d8953664db" (UID: "44f484a0-a976-4e56-82d9-84d8953664db"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.528862 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44f484a0-a976-4e56-82d9-84d8953664db-config" (OuterVolumeSpecName: "config") pod "44f484a0-a976-4e56-82d9-84d8953664db" (UID: "44f484a0-a976-4e56-82d9-84d8953664db"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.531750 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-config" (OuterVolumeSpecName: "config") pod "5c087dd8-02f6-4d1b-bb5f-3a624b0210b3" (UID: "5c087dd8-02f6-4d1b-bb5f-3a624b0210b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.531738 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "5c087dd8-02f6-4d1b-bb5f-3a624b0210b3" (UID: "5c087dd8-02f6-4d1b-bb5f-3a624b0210b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.532322 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44f484a0-a976-4e56-82d9-84d8953664db-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "44f484a0-a976-4e56-82d9-84d8953664db" (UID: "44f484a0-a976-4e56-82d9-84d8953664db"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.532428 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-kube-api-access-6zwg6" (OuterVolumeSpecName: "kube-api-access-6zwg6") pod "5c087dd8-02f6-4d1b-bb5f-3a624b0210b3" (UID: "5c087dd8-02f6-4d1b-bb5f-3a624b0210b3"). InnerVolumeSpecName "kube-api-access-6zwg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.532728 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5c087dd8-02f6-4d1b-bb5f-3a624b0210b3" (UID: "5c087dd8-02f6-4d1b-bb5f-3a624b0210b3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.621261 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-client-ca\") pod \"route-controller-manager-5848c96655-jvmj4\" (UID: \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\") " pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.621365 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-serving-cert\") pod \"route-controller-manager-5848c96655-jvmj4\" (UID: \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\") " pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.621397 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqlvh\" (UniqueName: \"kubernetes.io/projected/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-kube-api-access-wqlvh\") pod \"route-controller-manager-5848c96655-jvmj4\" (UID: \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\") " pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.621424 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-config\") pod \"route-controller-manager-5848c96655-jvmj4\" (UID: \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\") " pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.621476 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.621489 5001 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44f484a0-a976-4e56-82d9-84d8953664db-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.621498 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44f484a0-a976-4e56-82d9-84d8953664db-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.621506 5001 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44f484a0-a976-4e56-82d9-84d8953664db-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.621517 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.621525 5001 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.621533 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg5v5\" (UniqueName: 
\"kubernetes.io/projected/44f484a0-a976-4e56-82d9-84d8953664db-kube-api-access-xg5v5\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.621541 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44f484a0-a976-4e56-82d9-84d8953664db-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.621550 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zwg6\" (UniqueName: \"kubernetes.io/projected/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3-kube-api-access-6zwg6\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.784782 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-client-ca\") pod \"route-controller-manager-5848c96655-jvmj4\" (UID: \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\") " pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.785072 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-config\") pod \"route-controller-manager-5848c96655-jvmj4\" (UID: \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\") " pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.787071 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-serving-cert\") pod \"route-controller-manager-5848c96655-jvmj4\" (UID: \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\") " pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.787155 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqlvh\" (UniqueName: \"kubernetes.io/projected/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-kube-api-access-wqlvh\") pod \"route-controller-manager-5848c96655-jvmj4\" (UID: \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\") " pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.824899 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rnn76"] Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.893469 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.893460 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v" event={"ID":"5c087dd8-02f6-4d1b-bb5f-3a624b0210b3","Type":"ContainerDied","Data":"5f9d9415ec3ab91511df9dd07a0682c093bfddb36cf85dada13a9d67ec5f5c45"} Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.893832 5001 scope.go:117] "RemoveContainer" containerID="ce9015950af2499718d12c440724dc51f06b96becc67eeb052377cd9ea3a8e24" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.896842 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" event={"ID":"44f484a0-a976-4e56-82d9-84d8953664db","Type":"ContainerDied","Data":"d522f61544d33d2781f4bfea79dfe2832cfde98f68f13a85d6fb38eaee336515"} Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.896926 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tv6fl" Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.911690 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v"] Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.913353 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8kk4v"] Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.921364 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tv6fl"] Jan 28 17:19:14 crc kubenswrapper[5001]: I0128 17:19:14.924100 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tv6fl"] Jan 28 17:19:15 crc kubenswrapper[5001]: I0128 17:19:15.085205 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.601992 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44f484a0-a976-4e56-82d9-84d8953664db" path="/var/lib/kubelet/pods/44f484a0-a976-4e56-82d9-84d8953664db/volumes" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.603068 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c087dd8-02f6-4d1b-bb5f-3a624b0210b3" path="/var/lib/kubelet/pods/5c087dd8-02f6-4d1b-bb5f-3a624b0210b3/volumes" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.632812 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b567fb484-fh9hf"] Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.633666 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.637801 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.638055 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.638200 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.638428 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.639213 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.639428 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.645470 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b567fb484-fh9hf"] Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.645651 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.753018 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7345b989-60c8-47fb-ae84-633f1c3d6ffd-config\") pod \"controller-manager-6b567fb484-fh9hf\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.753119 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7345b989-60c8-47fb-ae84-633f1c3d6ffd-serving-cert\") pod \"controller-manager-6b567fb484-fh9hf\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.753432 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxq7d\" (UniqueName: \"kubernetes.io/projected/7345b989-60c8-47fb-ae84-633f1c3d6ffd-kube-api-access-lxq7d\") pod \"controller-manager-6b567fb484-fh9hf\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.753481 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7345b989-60c8-47fb-ae84-633f1c3d6ffd-client-ca\") pod \"controller-manager-6b567fb484-fh9hf\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.753539 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7345b989-60c8-47fb-ae84-633f1c3d6ffd-proxy-ca-bundles\") pod \"controller-manager-6b567fb484-fh9hf\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.854757 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7345b989-60c8-47fb-ae84-633f1c3d6ffd-config\") pod \"controller-manager-6b567fb484-fh9hf\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.854837 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7345b989-60c8-47fb-ae84-633f1c3d6ffd-serving-cert\") pod \"controller-manager-6b567fb484-fh9hf\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.854874 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxq7d\" (UniqueName: \"kubernetes.io/projected/7345b989-60c8-47fb-ae84-633f1c3d6ffd-kube-api-access-lxq7d\") pod \"controller-manager-6b567fb484-fh9hf\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.854896 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7345b989-60c8-47fb-ae84-633f1c3d6ffd-client-ca\") pod \"controller-manager-6b567fb484-fh9hf\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.854945 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7345b989-60c8-47fb-ae84-633f1c3d6ffd-proxy-ca-bundles\") pod \"controller-manager-6b567fb484-fh9hf\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.855894 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7345b989-60c8-47fb-ae84-633f1c3d6ffd-client-ca\") pod \"controller-manager-6b567fb484-fh9hf\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.856245 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7345b989-60c8-47fb-ae84-633f1c3d6ffd-config\") pod \"controller-manager-6b567fb484-fh9hf\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.856401 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7345b989-60c8-47fb-ae84-633f1c3d6ffd-proxy-ca-bundles\") pod \"controller-manager-6b567fb484-fh9hf\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " 
pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.862678 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7345b989-60c8-47fb-ae84-633f1c3d6ffd-serving-cert\") pod \"controller-manager-6b567fb484-fh9hf\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.873311 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxq7d\" (UniqueName: \"kubernetes.io/projected/7345b989-60c8-47fb-ae84-633f1c3d6ffd-kube-api-access-lxq7d\") pod \"controller-manager-6b567fb484-fh9hf\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:16 crc kubenswrapper[5001]: I0128 17:19:16.969497 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:18 crc kubenswrapper[5001]: E0128 17:19:18.897078 5001 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 17:19:18 crc kubenswrapper[5001]: E0128 17:19:18.897933 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqqq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-97z2w_openshift-marketplace(04b4625e-0b3f-44a9-b1a9-5855e74eef29): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 17:19:18 crc kubenswrapper[5001]: E0128 17:19:18.899161 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc 
= copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-97z2w" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" Jan 28 17:19:20 crc kubenswrapper[5001]: E0128 17:19:20.338032 5001 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 17:19:20 crc kubenswrapper[5001]: E0128 17:19:20.338756 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7rx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gsncd_openshift-marketplace(68df3eed-9a6f-4127-ac82-a61ae7216062): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 17:19:20 crc kubenswrapper[5001]: E0128 17:19:20.340048 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gsncd" podUID="68df3eed-9a6f-4127-ac82-a61ae7216062" Jan 28 17:19:20 crc kubenswrapper[5001]: E0128 17:19:20.364905 5001 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 17:19:20 crc kubenswrapper[5001]: E0128 17:19:20.365159 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b77tb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-957th_openshift-marketplace(95c88444-d303-455f-b732-0e144a5f98e8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 17:19:20 crc kubenswrapper[5001]: E0128 17:19:20.366405 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-957th" podUID="95c88444-d303-455f-b732-0e144a5f98e8" Jan 28 17:19:20 crc kubenswrapper[5001]: E0128 17:19:20.389632 5001 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 17:19:20 crc kubenswrapper[5001]: E0128 17:19:20.389801 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j942c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2mp88_openshift-marketplace(c43a921e-0efa-4e2c-b425-21f7cd87a24b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 17:19:20 crc kubenswrapper[5001]: E0128 17:19:20.390995 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2mp88" podUID="c43a921e-0efa-4e2c-b425-21f7cd87a24b" Jan 28 17:19:21 crc kubenswrapper[5001]: E0128 17:19:21.654851 5001 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 17:19:21 crc kubenswrapper[5001]: E0128 17:19:21.655232 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gk7w8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-nwlzb_openshift-marketplace(a7567e81-456f-4076-9d78-84e85d057dd4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 17:19:21 crc kubenswrapper[5001]: E0128 17:19:21.656559 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-nwlzb" podUID="a7567e81-456f-4076-9d78-84e85d057dd4" Jan 28 17:19:21 crc kubenswrapper[5001]: E0128 17:19:21.785626 5001 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 17:19:21 crc kubenswrapper[5001]: E0128 17:19:21.785819 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdqvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bj9wh_openshift-marketplace(24b568f4-71c2-4cae-932f-b6f1a2daf7a5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 17:19:21 crc kubenswrapper[5001]: E0128 17:19:21.787027 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-bj9wh" podUID="24b568f4-71c2-4cae-932f-b6f1a2daf7a5" Jan 28 17:19:23 crc kubenswrapper[5001]: E0128 17:19:23.454192 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-nwlzb" podUID="a7567e81-456f-4076-9d78-84e85d057dd4" Jan 28 17:19:23 crc kubenswrapper[5001]: E0128 17:19:23.454332 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bj9wh" podUID="24b568f4-71c2-4cae-932f-b6f1a2daf7a5" Jan 28 17:19:23 crc kubenswrapper[5001]: E0128 17:19:23.454404 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-97z2w" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" Jan 28 17:19:23 crc kubenswrapper[5001]: E0128 17:19:23.455063 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gsncd" podUID="68df3eed-9a6f-4127-ac82-a61ae7216062" Jan 28 17:19:23 crc 
kubenswrapper[5001]: E0128 17:19:23.455144 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-957th" podUID="95c88444-d303-455f-b732-0e144a5f98e8" Jan 28 17:19:23 crc kubenswrapper[5001]: E0128 17:19:23.455350 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2mp88" podUID="c43a921e-0efa-4e2c-b425-21f7cd87a24b" Jan 28 17:19:23 crc kubenswrapper[5001]: I0128 17:19:23.464683 5001 scope.go:117] "RemoveContainer" containerID="1853c48e1eaea7acc93da29ac5b2ac4d9eaf7d705635c81af9e03c2b1c1e24d9" Jan 28 17:19:23 crc kubenswrapper[5001]: E0128 17:19:23.817375 5001 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 17:19:23 crc kubenswrapper[5001]: E0128 17:19:23.818084 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kzzb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rnl7x_openshift-marketplace(c648cc46-2f0e-4c7f-aaeb-a6abf4486e41): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 17:19:23 crc kubenswrapper[5001]: E0128 17:19:23.820259 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rnl7x" 
podUID="c648cc46-2f0e-4c7f-aaeb-a6abf4486e41" Jan 28 17:19:23 crc kubenswrapper[5001]: I0128 17:19:23.831531 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4"] Jan 28 17:19:23 crc kubenswrapper[5001]: W0128 17:19:23.843701 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55cf6127_56c0_4f0d_8a6b_fee169e1b0f2.slice/crio-4911b343e86c849505723f1f466b5a15939928bedbcaa6fdc37332e0573fa3e4 WatchSource:0}: Error finding container 4911b343e86c849505723f1f466b5a15939928bedbcaa6fdc37332e0573fa3e4: Status 404 returned error can't find the container with id 4911b343e86c849505723f1f466b5a15939928bedbcaa6fdc37332e0573fa3e4 Jan 28 17:19:23 crc kubenswrapper[5001]: E0128 17:19:23.914609 5001 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 17:19:23 crc kubenswrapper[5001]: E0128 17:19:23.914897 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ww7kn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-btn8g_openshift-marketplace(0e4e8139-d262-4a83-aecc-f41c19d0c775): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 17:19:23 crc kubenswrapper[5001]: E0128 17:19:23.916933 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-btn8g" podUID="0e4e8139-d262-4a83-aecc-f41c19d0c775" Jan 28 17:19:23 crc kubenswrapper[5001]: I0128 17:19:23.933267 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 17:19:23 crc kubenswrapper[5001]: I0128 17:19:23.935717 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b567fb484-fh9hf"] Jan 28 17:19:23 crc kubenswrapper[5001]: I0128 17:19:23.948200 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rnn76" event={"ID":"2b5caa8d-b144-45a6-b334-e9e77c13064d","Type":"ContainerStarted","Data":"5d93735fbc04c757ab9f1229b75991fb0466b9d5ef4dca1d2aead082e5f409b1"} Jan 28 17:19:23 crc kubenswrapper[5001]: I0128 17:19:23.948258 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rnn76" event={"ID":"2b5caa8d-b144-45a6-b334-e9e77c13064d","Type":"ContainerStarted","Data":"386a2bb16a0f2195e7a528babaa24cae9eb98443b92deba0a970a94970d76ce0"} Jan 28 17:19:23 crc kubenswrapper[5001]: I0128 17:19:23.956108 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" event={"ID":"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2","Type":"ContainerStarted","Data":"4911b343e86c849505723f1f466b5a15939928bedbcaa6fdc37332e0573fa3e4"} Jan 28 17:19:23 crc kubenswrapper[5001]: I0128 17:19:23.959052 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" event={"ID":"7345b989-60c8-47fb-ae84-633f1c3d6ffd","Type":"ContainerStarted","Data":"728f4ba4c484cfa3f55e55a76f5533724f0904b391e82975ab54ce3168becd1e"} Jan 28 17:19:23 crc kubenswrapper[5001]: E0128 17:19:23.964338 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-btn8g" podUID="0e4e8139-d262-4a83-aecc-f41c19d0c775" Jan 28 17:19:23 crc kubenswrapper[5001]: E0128 17:19:23.965009 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rnl7x" podUID="c648cc46-2f0e-4c7f-aaeb-a6abf4486e41" Jan 28 17:19:23 crc kubenswrapper[5001]: I0128 17:19:23.973249 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 17:19:24 crc kubenswrapper[5001]: I0128 17:19:24.963425 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"38583aed-ca40-4a55-8f7e-4695e7117543","Type":"ContainerStarted","Data":"5c2740252ab6dc026505f508968a9ad24e7bcb94d62ef9e2f6b6a73af99390da"} Jan 28 17:19:24 crc kubenswrapper[5001]: I0128 17:19:24.964007 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"38583aed-ca40-4a55-8f7e-4695e7117543","Type":"ContainerStarted","Data":"91767f70ec2b1fe6684a36932e7acd3030d4063731330b57493031657bab2606"} Jan 28 17:19:24 crc kubenswrapper[5001]: I0128 17:19:24.965618 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" event={"ID":"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2","Type":"ContainerStarted","Data":"e95143cb2762e4275d33bf7431c2a9162e89192650c8923f703ef9d95800ba61"} Jan 28 17:19:24 crc 
kubenswrapper[5001]: I0128 17:19:24.965941 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" Jan 28 17:19:24 crc kubenswrapper[5001]: I0128 17:19:24.968436 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" event={"ID":"7345b989-60c8-47fb-ae84-633f1c3d6ffd","Type":"ContainerStarted","Data":"c7b588a6d5d781f89738da85ea7b19f666bed6677b8c63e5c7f1eea3f019a525"} Jan 28 17:19:24 crc kubenswrapper[5001]: I0128 17:19:24.969910 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:24 crc kubenswrapper[5001]: I0128 17:19:24.972281 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7075bc2e-15dc-4bbc-a5d7-f77c163576fa","Type":"ContainerStarted","Data":"3c16635a456b0f78abd4ae4b3863c01eb6279849bdbacc27ee203e6469658bb5"} Jan 28 17:19:24 crc kubenswrapper[5001]: I0128 17:19:24.972335 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7075bc2e-15dc-4bbc-a5d7-f77c163576fa","Type":"ContainerStarted","Data":"16f87bf676c0340834685fcba3430c0cbdb89ae693581b66a64df16791ebc629"} Jan 28 17:19:24 crc kubenswrapper[5001]: I0128 17:19:24.973333 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:24 crc kubenswrapper[5001]: I0128 17:19:24.973509 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" Jan 28 17:19:24 crc kubenswrapper[5001]: I0128 17:19:24.984452 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rnn76" event={"ID":"2b5caa8d-b144-45a6-b334-e9e77c13064d","Type":"ContainerStarted","Data":"8cbe1830a7555e66e8e465d3055942598b7d6acf827099f82098a18d12c8ae54"} Jan 28 17:19:24 crc kubenswrapper[5001]: I0128 17:19:24.985264 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=17.985243976 podStartE2EDuration="17.985243976s" podCreationTimestamp="2026-01-28 17:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:19:24.976922687 +0000 UTC m=+211.144710937" watchObservedRunningTime="2026-01-28 17:19:24.985243976 +0000 UTC m=+211.153032206" Jan 28 17:19:25 crc kubenswrapper[5001]: I0128 17:19:25.005438 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" podStartSLOduration=23.005418567 podStartE2EDuration="23.005418567s" podCreationTimestamp="2026-01-28 17:19:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:19:25.003586019 +0000 UTC m=+211.171374249" watchObservedRunningTime="2026-01-28 17:19:25.005418567 +0000 UTC m=+211.173206797" Jan 28 17:19:25 crc kubenswrapper[5001]: I0128 17:19:25.034218 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" podStartSLOduration=23.034196867 
podStartE2EDuration="23.034196867s" podCreationTimestamp="2026-01-28 17:19:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:19:25.032059708 +0000 UTC m=+211.199847938" watchObservedRunningTime="2026-01-28 17:19:25.034196867 +0000 UTC m=+211.201985097" Jan 28 17:19:25 crc kubenswrapper[5001]: I0128 17:19:25.054838 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=13.054819074 podStartE2EDuration="13.054819074s" podCreationTimestamp="2026-01-28 17:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:19:25.051526782 +0000 UTC m=+211.219315002" watchObservedRunningTime="2026-01-28 17:19:25.054819074 +0000 UTC m=+211.222607314" Jan 28 17:19:25 crc kubenswrapper[5001]: I0128 17:19:25.081257 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-rnn76" podStartSLOduration=185.081232867 podStartE2EDuration="3m5.081232867s" podCreationTimestamp="2026-01-28 17:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:19:25.075626779 +0000 UTC m=+211.243415029" watchObservedRunningTime="2026-01-28 17:19:25.081232867 +0000 UTC m=+211.249021097" Jan 28 17:19:25 crc kubenswrapper[5001]: I0128 17:19:25.991677 5001 generic.go:334] "Generic (PLEG): container finished" podID="38583aed-ca40-4a55-8f7e-4695e7117543" containerID="5c2740252ab6dc026505f508968a9ad24e7bcb94d62ef9e2f6b6a73af99390da" exitCode=0 Jan 28 17:19:25 crc kubenswrapper[5001]: I0128 17:19:25.991813 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"38583aed-ca40-4a55-8f7e-4695e7117543","Type":"ContainerDied","Data":"5c2740252ab6dc026505f508968a9ad24e7bcb94d62ef9e2f6b6a73af99390da"} Jan 28 17:19:27 crc kubenswrapper[5001]: I0128 17:19:27.296615 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 17:19:27 crc kubenswrapper[5001]: I0128 17:19:27.403412 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38583aed-ca40-4a55-8f7e-4695e7117543-kubelet-dir\") pod \"38583aed-ca40-4a55-8f7e-4695e7117543\" (UID: \"38583aed-ca40-4a55-8f7e-4695e7117543\") " Jan 28 17:19:27 crc kubenswrapper[5001]: I0128 17:19:27.403542 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38583aed-ca40-4a55-8f7e-4695e7117543-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "38583aed-ca40-4a55-8f7e-4695e7117543" (UID: "38583aed-ca40-4a55-8f7e-4695e7117543"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:19:27 crc kubenswrapper[5001]: I0128 17:19:27.403912 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38583aed-ca40-4a55-8f7e-4695e7117543-kube-api-access\") pod \"38583aed-ca40-4a55-8f7e-4695e7117543\" (UID: \"38583aed-ca40-4a55-8f7e-4695e7117543\") " Jan 28 17:19:27 crc kubenswrapper[5001]: I0128 17:19:27.404325 5001 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38583aed-ca40-4a55-8f7e-4695e7117543-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:27 crc kubenswrapper[5001]: I0128 17:19:27.412243 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38583aed-ca40-4a55-8f7e-4695e7117543-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "38583aed-ca40-4a55-8f7e-4695e7117543" (UID: "38583aed-ca40-4a55-8f7e-4695e7117543"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:19:27 crc kubenswrapper[5001]: I0128 17:19:27.505136 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38583aed-ca40-4a55-8f7e-4695e7117543-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:27 crc kubenswrapper[5001]: I0128 17:19:27.899205 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fhtpl"] Jan 28 17:19:28 crc kubenswrapper[5001]: I0128 17:19:28.007483 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"38583aed-ca40-4a55-8f7e-4695e7117543","Type":"ContainerDied","Data":"91767f70ec2b1fe6684a36932e7acd3030d4063731330b57493031657bab2606"} Jan 28 17:19:28 crc kubenswrapper[5001]: I0128 17:19:28.007520 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91767f70ec2b1fe6684a36932e7acd3030d4063731330b57493031657bab2606" Jan 28 17:19:28 crc kubenswrapper[5001]: I0128 17:19:28.007526 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 17:19:34 crc kubenswrapper[5001]: I0128 17:19:34.834698 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:19:34 crc kubenswrapper[5001]: I0128 17:19:34.835155 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:19:34 crc kubenswrapper[5001]: I0128 17:19:34.835202 5001 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:19:34 crc kubenswrapper[5001]: I0128 17:19:34.835736 5001 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9"} pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:19:34 crc kubenswrapper[5001]: I0128 17:19:34.835856 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" containerID="cri-o://baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9" gracePeriod=600 Jan 28 17:19:35 crc kubenswrapper[5001]: I0128 17:19:35.041408 5001 generic.go:334] "Generic (PLEG): container finished" podID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerID="baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9" exitCode=0 Jan 28 17:19:35 crc kubenswrapper[5001]: I0128 17:19:35.041501 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerDied","Data":"baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9"} Jan 28 17:19:36 crc kubenswrapper[5001]: I0128 17:19:36.048260 5001 generic.go:334] "Generic (PLEG): container finished" podID="95c88444-d303-455f-b732-0e144a5f98e8" containerID="1eb568e8a153d4f722a9a349a2939ddae85bfe78ce8152cf4e0296d9fd47e28d" exitCode=0 Jan 28 17:19:36 crc kubenswrapper[5001]: I0128 17:19:36.048307 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-957th" event={"ID":"95c88444-d303-455f-b732-0e144a5f98e8","Type":"ContainerDied","Data":"1eb568e8a153d4f722a9a349a2939ddae85bfe78ce8152cf4e0296d9fd47e28d"} Jan 28 17:19:36 crc kubenswrapper[5001]: I0128 17:19:36.053254 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerStarted","Data":"b7faefbbab6a723fb78d92800b7de1c2ec73448a306765fa7e84130c0372ff39"} Jan 28 17:19:36 crc kubenswrapper[5001]: I0128 17:19:36.055562 5001 generic.go:334] "Generic (PLEG): container finished" podID="68df3eed-9a6f-4127-ac82-a61ae7216062" 
containerID="6680bed6587e40285cf0aee06ccef3ec65375c512a9d290add67d892cabf9695" exitCode=0 Jan 28 17:19:36 crc kubenswrapper[5001]: I0128 17:19:36.055629 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsncd" event={"ID":"68df3eed-9a6f-4127-ac82-a61ae7216062","Type":"ContainerDied","Data":"6680bed6587e40285cf0aee06ccef3ec65375c512a9d290add67d892cabf9695"} Jan 28 17:19:36 crc kubenswrapper[5001]: I0128 17:19:36.057571 5001 generic.go:334] "Generic (PLEG): container finished" podID="0e4e8139-d262-4a83-aecc-f41c19d0c775" containerID="7fd7d071fe5e824d5dac68af97110077d574d450565fa27a7c6a5894fe4d2c70" exitCode=0 Jan 28 17:19:36 crc kubenswrapper[5001]: I0128 17:19:36.057633 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btn8g" event={"ID":"0e4e8139-d262-4a83-aecc-f41c19d0c775","Type":"ContainerDied","Data":"7fd7d071fe5e824d5dac68af97110077d574d450565fa27a7c6a5894fe4d2c70"} Jan 28 17:19:36 crc kubenswrapper[5001]: I0128 17:19:36.060850 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mp88" event={"ID":"c43a921e-0efa-4e2c-b425-21f7cd87a24b","Type":"ContainerStarted","Data":"d42bedf41a5192321adc1a066c806a98a9325babe2fd11f3a168bfe772eb8da7"} Jan 28 17:19:37 crc kubenswrapper[5001]: I0128 17:19:37.079890 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btn8g" event={"ID":"0e4e8139-d262-4a83-aecc-f41c19d0c775","Type":"ContainerStarted","Data":"737d5ba4d0214330a71829e4e4fb20fa7fe806a89fae56834c80839eb7dc3d34"} Jan 28 17:19:37 crc kubenswrapper[5001]: I0128 17:19:37.082759 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwlzb" event={"ID":"a7567e81-456f-4076-9d78-84e85d057dd4","Type":"ContainerStarted","Data":"84a55307f487b25a1652c5ad9fb1a83aa740c5ef696667663af86570b9d7c14c"} Jan 28 17:19:37 crc kubenswrapper[5001]: I0128 17:19:37.088901 5001 generic.go:334] "Generic (PLEG): container finished" podID="c43a921e-0efa-4e2c-b425-21f7cd87a24b" containerID="d42bedf41a5192321adc1a066c806a98a9325babe2fd11f3a168bfe772eb8da7" exitCode=0 Jan 28 17:19:37 crc kubenswrapper[5001]: I0128 17:19:37.088932 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mp88" event={"ID":"c43a921e-0efa-4e2c-b425-21f7cd87a24b","Type":"ContainerDied","Data":"d42bedf41a5192321adc1a066c806a98a9325babe2fd11f3a168bfe772eb8da7"} Jan 28 17:19:37 crc kubenswrapper[5001]: I0128 17:19:37.090724 5001 generic.go:334] "Generic (PLEG): container finished" podID="c648cc46-2f0e-4c7f-aaeb-a6abf4486e41" containerID="8ed2adce3bf674680295c7beb112e3e399673faf8cc96af7eefe9f0ecb8b5bb4" exitCode=0 Jan 28 17:19:37 crc kubenswrapper[5001]: I0128 17:19:37.090809 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnl7x" event={"ID":"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41","Type":"ContainerDied","Data":"8ed2adce3bf674680295c7beb112e3e399673faf8cc96af7eefe9f0ecb8b5bb4"} Jan 28 17:19:38 crc kubenswrapper[5001]: I0128 17:19:38.108582 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-957th" event={"ID":"95c88444-d303-455f-b732-0e144a5f98e8","Type":"ContainerStarted","Data":"8ca7526c13699a66d61678cbfd2f064c8eebf1dc3e7584a693101610352a2f38"} Jan 28 17:19:38 crc kubenswrapper[5001]: I0128 17:19:38.111117 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-gsncd" event={"ID":"68df3eed-9a6f-4127-ac82-a61ae7216062","Type":"ContainerStarted","Data":"bd0bf2b2cfb3407f10dd3eeb25ae3eea7ca549b8a412f4548afeda5c3b4f41a5"} Jan 28 17:19:38 crc kubenswrapper[5001]: I0128 17:19:38.112695 5001 generic.go:334] "Generic (PLEG): container finished" podID="a7567e81-456f-4076-9d78-84e85d057dd4" containerID="84a55307f487b25a1652c5ad9fb1a83aa740c5ef696667663af86570b9d7c14c" exitCode=0 Jan 28 17:19:38 crc kubenswrapper[5001]: I0128 17:19:38.112732 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwlzb" event={"ID":"a7567e81-456f-4076-9d78-84e85d057dd4","Type":"ContainerDied","Data":"84a55307f487b25a1652c5ad9fb1a83aa740c5ef696667663af86570b9d7c14c"} Jan 28 17:19:38 crc kubenswrapper[5001]: I0128 17:19:38.128859 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-957th" podStartSLOduration=3.790275746 podStartE2EDuration="1m11.128843375s" podCreationTimestamp="2026-01-28 17:18:27 +0000 UTC" firstStartedPulling="2026-01-28 17:18:29.391658424 +0000 UTC m=+155.559446654" lastFinishedPulling="2026-01-28 17:19:36.730226043 +0000 UTC m=+222.898014283" observedRunningTime="2026-01-28 17:19:38.124328527 +0000 UTC m=+224.292116757" watchObservedRunningTime="2026-01-28 17:19:38.128843375 +0000 UTC m=+224.296631605" Jan 28 17:19:38 crc kubenswrapper[5001]: I0128 17:19:38.167275 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gsncd" podStartSLOduration=2.893103996 podStartE2EDuration="1m11.167254214s" podCreationTimestamp="2026-01-28 17:18:27 +0000 UTC" firstStartedPulling="2026-01-28 17:18:28.3691328 +0000 UTC m=+154.536921030" lastFinishedPulling="2026-01-28 17:19:36.643283018 +0000 UTC m=+222.811071248" observedRunningTime="2026-01-28 17:19:38.162517537 +0000 UTC m=+224.330305777" watchObservedRunningTime="2026-01-28 17:19:38.167254214 +0000 UTC m=+224.335042454" Jan 28 17:19:38 crc kubenswrapper[5001]: I0128 17:19:38.186957 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-957th" Jan 28 17:19:38 crc kubenswrapper[5001]: I0128 17:19:38.187076 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-957th" Jan 28 17:19:38 crc kubenswrapper[5001]: I0128 17:19:38.187897 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-btn8g" podStartSLOduration=3.951918483 podStartE2EDuration="1m11.187882401s" podCreationTimestamp="2026-01-28 17:18:27 +0000 UTC" firstStartedPulling="2026-01-28 17:18:29.398235322 +0000 UTC m=+155.566023552" lastFinishedPulling="2026-01-28 17:19:36.63419924 +0000 UTC m=+222.801987470" observedRunningTime="2026-01-28 17:19:38.18732644 +0000 UTC m=+224.355114670" watchObservedRunningTime="2026-01-28 17:19:38.187882401 +0000 UTC m=+224.355670631" Jan 28 17:19:39 crc kubenswrapper[5001]: I0128 17:19:39.792758 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-957th" podUID="95c88444-d303-455f-b732-0e144a5f98e8" containerName="registry-server" probeResult="failure" output=< Jan 28 17:19:39 crc kubenswrapper[5001]: timeout: failed to connect service ":50051" within 1s Jan 28 17:19:39 crc kubenswrapper[5001]: > Jan 28 17:19:42 crc kubenswrapper[5001]: I0128 17:19:42.704606 5001 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b567fb484-fh9hf"] Jan 28 17:19:42 crc kubenswrapper[5001]: I0128 17:19:42.705190 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" podUID="7345b989-60c8-47fb-ae84-633f1c3d6ffd" containerName="controller-manager" containerID="cri-o://c7b588a6d5d781f89738da85ea7b19f666bed6677b8c63e5c7f1eea3f019a525" gracePeriod=30 Jan 28 17:19:42 crc kubenswrapper[5001]: I0128 17:19:42.794147 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4"] Jan 28 17:19:42 crc kubenswrapper[5001]: I0128 17:19:42.794394 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" podUID="55cf6127-56c0-4f0d-8a6b-fee169e1b0f2" containerName="route-controller-manager" containerID="cri-o://e95143cb2762e4275d33bf7431c2a9162e89192650c8923f703ef9d95800ba61" gracePeriod=30 Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.143251 5001 generic.go:334] "Generic (PLEG): container finished" podID="7345b989-60c8-47fb-ae84-633f1c3d6ffd" containerID="c7b588a6d5d781f89738da85ea7b19f666bed6677b8c63e5c7f1eea3f019a525" exitCode=0 Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.143316 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" event={"ID":"7345b989-60c8-47fb-ae84-633f1c3d6ffd","Type":"ContainerDied","Data":"c7b588a6d5d781f89738da85ea7b19f666bed6677b8c63e5c7f1eea3f019a525"} Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.145692 5001 generic.go:334] "Generic (PLEG): container finished" podID="55cf6127-56c0-4f0d-8a6b-fee169e1b0f2" containerID="e95143cb2762e4275d33bf7431c2a9162e89192650c8923f703ef9d95800ba61" exitCode=0 Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.145746 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" event={"ID":"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2","Type":"ContainerDied","Data":"e95143cb2762e4275d33bf7431c2a9162e89192650c8923f703ef9d95800ba61"} Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.592336 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.621318 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn"] Jan 28 17:19:44 crc kubenswrapper[5001]: E0128 17:19:44.622610 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38583aed-ca40-4a55-8f7e-4695e7117543" containerName="pruner" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.622636 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="38583aed-ca40-4a55-8f7e-4695e7117543" containerName="pruner" Jan 28 17:19:44 crc kubenswrapper[5001]: E0128 17:19:44.622671 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55cf6127-56c0-4f0d-8a6b-fee169e1b0f2" containerName="route-controller-manager" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.622679 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="55cf6127-56c0-4f0d-8a6b-fee169e1b0f2" containerName="route-controller-manager" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.622789 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="38583aed-ca40-4a55-8f7e-4695e7117543" containerName="pruner" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.622801 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="55cf6127-56c0-4f0d-8a6b-fee169e1b0f2" containerName="route-controller-manager" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.623365 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.631090 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-config\") pod \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\" (UID: \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\") " Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.631127 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-client-ca\") pod \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\" (UID: \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\") " Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.631165 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqlvh\" (UniqueName: \"kubernetes.io/projected/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-kube-api-access-wqlvh\") pod \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\" (UID: \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\") " Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.631195 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-serving-cert\") pod \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\" (UID: \"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2\") " Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.631276 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn"] Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.631286 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8z6w\" (UniqueName: 
\"kubernetes.io/projected/c70f3e59-ff36-4184-b115-f429c9574f51-kube-api-access-d8z6w\") pod \"route-controller-manager-899449c8d-pc6nn\" (UID: \"c70f3e59-ff36-4184-b115-f429c9574f51\") " pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.631436 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c70f3e59-ff36-4184-b115-f429c9574f51-client-ca\") pod \"route-controller-manager-899449c8d-pc6nn\" (UID: \"c70f3e59-ff36-4184-b115-f429c9574f51\") " pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.631461 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c70f3e59-ff36-4184-b115-f429c9574f51-serving-cert\") pod \"route-controller-manager-899449c8d-pc6nn\" (UID: \"c70f3e59-ff36-4184-b115-f429c9574f51\") " pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.631553 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c70f3e59-ff36-4184-b115-f429c9574f51-config\") pod \"route-controller-manager-899449c8d-pc6nn\" (UID: \"c70f3e59-ff36-4184-b115-f429c9574f51\") " pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.632377 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-client-ca" (OuterVolumeSpecName: "client-ca") pod "55cf6127-56c0-4f0d-8a6b-fee169e1b0f2" (UID: "55cf6127-56c0-4f0d-8a6b-fee169e1b0f2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.632628 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-config" (OuterVolumeSpecName: "config") pod "55cf6127-56c0-4f0d-8a6b-fee169e1b0f2" (UID: "55cf6127-56c0-4f0d-8a6b-fee169e1b0f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.643132 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-kube-api-access-wqlvh" (OuterVolumeSpecName: "kube-api-access-wqlvh") pod "55cf6127-56c0-4f0d-8a6b-fee169e1b0f2" (UID: "55cf6127-56c0-4f0d-8a6b-fee169e1b0f2"). InnerVolumeSpecName "kube-api-access-wqlvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.646839 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "55cf6127-56c0-4f0d-8a6b-fee169e1b0f2" (UID: "55cf6127-56c0-4f0d-8a6b-fee169e1b0f2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.732192 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8z6w\" (UniqueName: \"kubernetes.io/projected/c70f3e59-ff36-4184-b115-f429c9574f51-kube-api-access-d8z6w\") pod \"route-controller-manager-899449c8d-pc6nn\" (UID: \"c70f3e59-ff36-4184-b115-f429c9574f51\") " pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.732244 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c70f3e59-ff36-4184-b115-f429c9574f51-client-ca\") pod \"route-controller-manager-899449c8d-pc6nn\" (UID: \"c70f3e59-ff36-4184-b115-f429c9574f51\") " pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.732261 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c70f3e59-ff36-4184-b115-f429c9574f51-serving-cert\") pod \"route-controller-manager-899449c8d-pc6nn\" (UID: \"c70f3e59-ff36-4184-b115-f429c9574f51\") " pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.732303 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c70f3e59-ff36-4184-b115-f429c9574f51-config\") pod \"route-controller-manager-899449c8d-pc6nn\" (UID: \"c70f3e59-ff36-4184-b115-f429c9574f51\") " pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.732348 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqlvh\" (UniqueName: \"kubernetes.io/projected/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-kube-api-access-wqlvh\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.732358 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.732368 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.732376 5001 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.733429 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c70f3e59-ff36-4184-b115-f429c9574f51-client-ca\") pod \"route-controller-manager-899449c8d-pc6nn\" (UID: \"c70f3e59-ff36-4184-b115-f429c9574f51\") " pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.733509 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c70f3e59-ff36-4184-b115-f429c9574f51-config\") pod \"route-controller-manager-899449c8d-pc6nn\" (UID: 
\"c70f3e59-ff36-4184-b115-f429c9574f51\") " pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.736956 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c70f3e59-ff36-4184-b115-f429c9574f51-serving-cert\") pod \"route-controller-manager-899449c8d-pc6nn\" (UID: \"c70f3e59-ff36-4184-b115-f429c9574f51\") " pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.748694 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8z6w\" (UniqueName: \"kubernetes.io/projected/c70f3e59-ff36-4184-b115-f429c9574f51-kube-api-access-d8z6w\") pod \"route-controller-manager-899449c8d-pc6nn\" (UID: \"c70f3e59-ff36-4184-b115-f429c9574f51\") " pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.789965 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.868524 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7345b989-60c8-47fb-ae84-633f1c3d6ffd-config\") pod \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.868579 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7345b989-60c8-47fb-ae84-633f1c3d6ffd-serving-cert\") pod \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.869536 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7345b989-60c8-47fb-ae84-633f1c3d6ffd-config" (OuterVolumeSpecName: "config") pod "7345b989-60c8-47fb-ae84-633f1c3d6ffd" (UID: "7345b989-60c8-47fb-ae84-633f1c3d6ffd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.871629 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7345b989-60c8-47fb-ae84-633f1c3d6ffd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7345b989-60c8-47fb-ae84-633f1c3d6ffd" (UID: "7345b989-60c8-47fb-ae84-633f1c3d6ffd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.969875 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7345b989-60c8-47fb-ae84-633f1c3d6ffd-client-ca\") pod \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.969968 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxq7d\" (UniqueName: \"kubernetes.io/projected/7345b989-60c8-47fb-ae84-633f1c3d6ffd-kube-api-access-lxq7d\") pod \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.970381 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7345b989-60c8-47fb-ae84-633f1c3d6ffd-client-ca" (OuterVolumeSpecName: "client-ca") pod "7345b989-60c8-47fb-ae84-633f1c3d6ffd" (UID: "7345b989-60c8-47fb-ae84-633f1c3d6ffd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.970463 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7345b989-60c8-47fb-ae84-633f1c3d6ffd-proxy-ca-bundles\") pod \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\" (UID: \"7345b989-60c8-47fb-ae84-633f1c3d6ffd\") " Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.970655 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.970712 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7345b989-60c8-47fb-ae84-633f1c3d6ffd-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.970723 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7345b989-60c8-47fb-ae84-633f1c3d6ffd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.970731 5001 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7345b989-60c8-47fb-ae84-633f1c3d6ffd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.971020 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7345b989-60c8-47fb-ae84-633f1c3d6ffd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7345b989-60c8-47fb-ae84-633f1c3d6ffd" (UID: "7345b989-60c8-47fb-ae84-633f1c3d6ffd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:44 crc kubenswrapper[5001]: I0128 17:19:44.972291 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7345b989-60c8-47fb-ae84-633f1c3d6ffd-kube-api-access-lxq7d" (OuterVolumeSpecName: "kube-api-access-lxq7d") pod "7345b989-60c8-47fb-ae84-633f1c3d6ffd" (UID: "7345b989-60c8-47fb-ae84-633f1c3d6ffd"). InnerVolumeSpecName "kube-api-access-lxq7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:19:45 crc kubenswrapper[5001]: I0128 17:19:45.071557 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxq7d\" (UniqueName: \"kubernetes.io/projected/7345b989-60c8-47fb-ae84-633f1c3d6ffd-kube-api-access-lxq7d\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:45 crc kubenswrapper[5001]: I0128 17:19:45.072032 5001 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7345b989-60c8-47fb-ae84-633f1c3d6ffd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:45 crc kubenswrapper[5001]: I0128 17:19:45.153105 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" Jan 28 17:19:45 crc kubenswrapper[5001]: I0128 17:19:45.153103 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4" event={"ID":"55cf6127-56c0-4f0d-8a6b-fee169e1b0f2","Type":"ContainerDied","Data":"4911b343e86c849505723f1f466b5a15939928bedbcaa6fdc37332e0573fa3e4"} Jan 28 17:19:45 crc kubenswrapper[5001]: I0128 17:19:45.153257 5001 scope.go:117] "RemoveContainer" containerID="e95143cb2762e4275d33bf7431c2a9162e89192650c8923f703ef9d95800ba61" Jan 28 17:19:45 crc kubenswrapper[5001]: I0128 17:19:45.154140 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" event={"ID":"7345b989-60c8-47fb-ae84-633f1c3d6ffd","Type":"ContainerDied","Data":"728f4ba4c484cfa3f55e55a76f5533724f0904b391e82975ab54ce3168becd1e"} Jan 28 17:19:45 crc kubenswrapper[5001]: I0128 17:19:45.154186 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b567fb484-fh9hf" Jan 28 17:19:45 crc kubenswrapper[5001]: I0128 17:19:45.183462 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b567fb484-fh9hf"] Jan 28 17:19:45 crc kubenswrapper[5001]: I0128 17:19:45.187319 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b567fb484-fh9hf"] Jan 28 17:19:45 crc kubenswrapper[5001]: I0128 17:19:45.194916 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4"] Jan 28 17:19:45 crc kubenswrapper[5001]: I0128 17:19:45.197561 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5848c96655-jvmj4"] Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.255445 5001 scope.go:117] "RemoveContainer" containerID="c7b588a6d5d781f89738da85ea7b19f666bed6677b8c63e5c7f1eea3f019a525" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.627426 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55cf6127-56c0-4f0d-8a6b-fee169e1b0f2" path="/var/lib/kubelet/pods/55cf6127-56c0-4f0d-8a6b-fee169e1b0f2/volumes" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.628203 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7345b989-60c8-47fb-ae84-633f1c3d6ffd" path="/var/lib/kubelet/pods/7345b989-60c8-47fb-ae84-633f1c3d6ffd/volumes" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.662091 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq"] Jan 28 17:19:46 crc kubenswrapper[5001]: E0128 17:19:46.662335 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7345b989-60c8-47fb-ae84-633f1c3d6ffd" containerName="controller-manager" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.662351 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="7345b989-60c8-47fb-ae84-633f1c3d6ffd" containerName="controller-manager" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.662486 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="7345b989-60c8-47fb-ae84-633f1c3d6ffd" containerName="controller-manager" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.662871 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.664824 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.665403 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.665941 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.666798 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.667241 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.667592 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.670407 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq"] Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.671544 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.694761 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e11d73c8-86be-4813-b947-2da20c575510-client-ca\") pod \"controller-manager-76dbcd8bd5-n57xq\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.694817 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e11d73c8-86be-4813-b947-2da20c575510-proxy-ca-bundles\") pod \"controller-manager-76dbcd8bd5-n57xq\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.694942 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e11d73c8-86be-4813-b947-2da20c575510-serving-cert\") pod \"controller-manager-76dbcd8bd5-n57xq\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.695124 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f8hv\" (UniqueName: \"kubernetes.io/projected/e11d73c8-86be-4813-b947-2da20c575510-kube-api-access-9f8hv\") pod \"controller-manager-76dbcd8bd5-n57xq\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.695149 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e11d73c8-86be-4813-b947-2da20c575510-config\") pod \"controller-manager-76dbcd8bd5-n57xq\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.796733 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f8hv\" (UniqueName: \"kubernetes.io/projected/e11d73c8-86be-4813-b947-2da20c575510-kube-api-access-9f8hv\") pod \"controller-manager-76dbcd8bd5-n57xq\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.796791 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e11d73c8-86be-4813-b947-2da20c575510-config\") pod \"controller-manager-76dbcd8bd5-n57xq\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.796839 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e11d73c8-86be-4813-b947-2da20c575510-client-ca\") pod \"controller-manager-76dbcd8bd5-n57xq\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.796859 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e11d73c8-86be-4813-b947-2da20c575510-proxy-ca-bundles\") pod \"controller-manager-76dbcd8bd5-n57xq\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.796887 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e11d73c8-86be-4813-b947-2da20c575510-serving-cert\") pod \"controller-manager-76dbcd8bd5-n57xq\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.800826 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e11d73c8-86be-4813-b947-2da20c575510-client-ca\") pod \"controller-manager-76dbcd8bd5-n57xq\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.803144 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e11d73c8-86be-4813-b947-2da20c575510-proxy-ca-bundles\") pod \"controller-manager-76dbcd8bd5-n57xq\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.803396 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e11d73c8-86be-4813-b947-2da20c575510-config\") pod \"controller-manager-76dbcd8bd5-n57xq\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" 
Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.806930 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e11d73c8-86be-4813-b947-2da20c575510-serving-cert\") pod \"controller-manager-76dbcd8bd5-n57xq\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.852133 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f8hv\" (UniqueName: \"kubernetes.io/projected/e11d73c8-86be-4813-b947-2da20c575510-kube-api-access-9f8hv\") pod \"controller-manager-76dbcd8bd5-n57xq\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:19:46 crc kubenswrapper[5001]: I0128 17:19:46.991831 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:19:47 crc kubenswrapper[5001]: I0128 17:19:47.772393 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gsncd" Jan 28 17:19:47 crc kubenswrapper[5001]: I0128 17:19:47.772717 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gsncd" Jan 28 17:19:47 crc kubenswrapper[5001]: I0128 17:19:47.884675 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gsncd" Jan 28 17:19:47 crc kubenswrapper[5001]: I0128 17:19:47.971728 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-btn8g" Jan 28 17:19:47 crc kubenswrapper[5001]: I0128 17:19:47.971852 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-btn8g" Jan 28 17:19:48 crc kubenswrapper[5001]: I0128 17:19:48.009688 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-btn8g" Jan 28 17:19:48 crc kubenswrapper[5001]: I0128 17:19:48.052730 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn"] Jan 28 17:19:48 crc kubenswrapper[5001]: I0128 17:19:48.124679 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq"] Jan 28 17:19:48 crc kubenswrapper[5001]: W0128 17:19:48.157354 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode11d73c8_86be_4813_b947_2da20c575510.slice/crio-aed43f2b0b3c64a10d1d75eae36b0999604e86ad0ca10852891e3c8e210cb96f WatchSource:0}: Error finding container aed43f2b0b3c64a10d1d75eae36b0999604e86ad0ca10852891e3c8e210cb96f: Status 404 returned error can't find the container with id aed43f2b0b3c64a10d1d75eae36b0999604e86ad0ca10852891e3c8e210cb96f Jan 28 17:19:48 crc kubenswrapper[5001]: I0128 17:19:48.172929 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwlzb" event={"ID":"a7567e81-456f-4076-9d78-84e85d057dd4","Type":"ContainerStarted","Data":"412c8cca5fa0bc67baa3501c8e898331ce77d27c4613c6e4e1b5f113e9543134"} Jan 28 17:19:48 crc kubenswrapper[5001]: I0128 17:19:48.176758 5001 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-bj9wh" event={"ID":"24b568f4-71c2-4cae-932f-b6f1a2daf7a5","Type":"ContainerStarted","Data":"ec744f00094c6841ef4c040a09830025f9dfadc7ca0d7460b77c874b10cea178"} Jan 28 17:19:48 crc kubenswrapper[5001]: I0128 17:19:48.180057 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnl7x" event={"ID":"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41","Type":"ContainerStarted","Data":"c7bad77984c4ebe1a4ae790931c775b0531530425e4f12edd70f4b70776923ef"} Jan 28 17:19:48 crc kubenswrapper[5001]: I0128 17:19:48.181672 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" event={"ID":"c70f3e59-ff36-4184-b115-f429c9574f51","Type":"ContainerStarted","Data":"c7be28de708ffd0781b51ba60cde311a0087019aa6a413f3a0d2af3b5989557c"} Jan 28 17:19:48 crc kubenswrapper[5001]: I0128 17:19:48.203717 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97z2w" event={"ID":"04b4625e-0b3f-44a9-b1a9-5855e74eef29","Type":"ContainerStarted","Data":"6502a47380c043640831573c7e7e2f48c56336e42021abe20f4b377e4fb9b5e2"} Jan 28 17:19:48 crc kubenswrapper[5001]: I0128 17:19:48.205239 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" event={"ID":"e11d73c8-86be-4813-b947-2da20c575510","Type":"ContainerStarted","Data":"aed43f2b0b3c64a10d1d75eae36b0999604e86ad0ca10852891e3c8e210cb96f"} Jan 28 17:19:48 crc kubenswrapper[5001]: I0128 17:19:48.207181 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mp88" event={"ID":"c43a921e-0efa-4e2c-b425-21f7cd87a24b","Type":"ContainerStarted","Data":"069e42f02a3bfe167b6cc4e56309c801cfde3ce31e08e5e86f8750af464bc2ab"} Jan 28 17:19:48 crc kubenswrapper[5001]: I0128 17:19:48.256614 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2mp88" podStartSLOduration=6.485760514 podStartE2EDuration="1m18.256596768s" podCreationTimestamp="2026-01-28 17:18:30 +0000 UTC" firstStartedPulling="2026-01-28 17:18:32.457406801 +0000 UTC m=+158.625195031" lastFinishedPulling="2026-01-28 17:19:44.228243055 +0000 UTC m=+230.396031285" observedRunningTime="2026-01-28 17:19:48.256556927 +0000 UTC m=+234.424345157" watchObservedRunningTime="2026-01-28 17:19:48.256596768 +0000 UTC m=+234.424384998" Jan 28 17:19:48 crc kubenswrapper[5001]: I0128 17:19:48.263815 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-957th" Jan 28 17:19:48 crc kubenswrapper[5001]: I0128 17:19:48.275942 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gsncd" Jan 28 17:19:48 crc kubenswrapper[5001]: I0128 17:19:48.279754 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-btn8g" Jan 28 17:19:48 crc kubenswrapper[5001]: I0128 17:19:48.351672 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-957th" Jan 28 17:19:49 crc kubenswrapper[5001]: I0128 17:19:49.212375 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" 
event={"ID":"e11d73c8-86be-4813-b947-2da20c575510","Type":"ContainerStarted","Data":"e8c76dcf210ff1e1a448d1119400254f69111a5b574f6e932caefa5a4c495e9e"} Jan 28 17:19:49 crc kubenswrapper[5001]: I0128 17:19:49.213689 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:19:49 crc kubenswrapper[5001]: I0128 17:19:49.215506 5001 generic.go:334] "Generic (PLEG): container finished" podID="24b568f4-71c2-4cae-932f-b6f1a2daf7a5" containerID="ec744f00094c6841ef4c040a09830025f9dfadc7ca0d7460b77c874b10cea178" exitCode=0 Jan 28 17:19:49 crc kubenswrapper[5001]: I0128 17:19:49.215573 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj9wh" event={"ID":"24b568f4-71c2-4cae-932f-b6f1a2daf7a5","Type":"ContainerDied","Data":"ec744f00094c6841ef4c040a09830025f9dfadc7ca0d7460b77c874b10cea178"} Jan 28 17:19:49 crc kubenswrapper[5001]: I0128 17:19:49.217086 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" event={"ID":"c70f3e59-ff36-4184-b115-f429c9574f51","Type":"ContainerStarted","Data":"6515d3eb32e8f83ee0f02309373e1d8cb64cf13c803dbfefda2f34dd34da481f"} Jan 28 17:19:49 crc kubenswrapper[5001]: I0128 17:19:49.217303 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" Jan 28 17:19:49 crc kubenswrapper[5001]: I0128 17:19:49.218104 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:19:49 crc kubenswrapper[5001]: I0128 17:19:49.218944 5001 generic.go:334] "Generic (PLEG): container finished" podID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" containerID="6502a47380c043640831573c7e7e2f48c56336e42021abe20f4b377e4fb9b5e2" exitCode=0 Jan 28 17:19:49 crc kubenswrapper[5001]: I0128 17:19:49.219067 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97z2w" event={"ID":"04b4625e-0b3f-44a9-b1a9-5855e74eef29","Type":"ContainerDied","Data":"6502a47380c043640831573c7e7e2f48c56336e42021abe20f4b377e4fb9b5e2"} Jan 28 17:19:49 crc kubenswrapper[5001]: I0128 17:19:49.225526 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" Jan 28 17:19:49 crc kubenswrapper[5001]: I0128 17:19:49.234623 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" podStartSLOduration=7.234601272 podStartE2EDuration="7.234601272s" podCreationTimestamp="2026-01-28 17:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:19:49.230639905 +0000 UTC m=+235.398428135" watchObservedRunningTime="2026-01-28 17:19:49.234601272 +0000 UTC m=+235.402389512" Jan 28 17:19:49 crc kubenswrapper[5001]: I0128 17:19:49.266757 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rnl7x" podStartSLOduration=2.970707 podStartE2EDuration="1m22.266736188s" podCreationTimestamp="2026-01-28 17:18:27 +0000 UTC" firstStartedPulling="2026-01-28 17:18:28.383106877 +0000 UTC m=+154.550895097" lastFinishedPulling="2026-01-28 17:19:47.679136055 +0000 UTC 
m=+233.846924285" observedRunningTime="2026-01-28 17:19:49.250589107 +0000 UTC m=+235.418377337" watchObservedRunningTime="2026-01-28 17:19:49.266736188 +0000 UTC m=+235.434524428" Jan 28 17:19:49 crc kubenswrapper[5001]: I0128 17:19:49.314643 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" podStartSLOduration=7.314604099 podStartE2EDuration="7.314604099s" podCreationTimestamp="2026-01-28 17:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:19:49.314253366 +0000 UTC m=+235.482041596" watchObservedRunningTime="2026-01-28 17:19:49.314604099 +0000 UTC m=+235.482392329" Jan 28 17:19:49 crc kubenswrapper[5001]: I0128 17:19:49.339605 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nwlzb" podStartSLOduration=6.56746654 podStartE2EDuration="1m20.339588348s" podCreationTimestamp="2026-01-28 17:18:29 +0000 UTC" firstStartedPulling="2026-01-28 17:18:32.483308363 +0000 UTC m=+158.651096593" lastFinishedPulling="2026-01-28 17:19:46.255430171 +0000 UTC m=+232.423218401" observedRunningTime="2026-01-28 17:19:49.334882533 +0000 UTC m=+235.502670763" watchObservedRunningTime="2026-01-28 17:19:49.339588348 +0000 UTC m=+235.507376578" Jan 28 17:19:50 crc kubenswrapper[5001]: I0128 17:19:50.296057 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nwlzb" Jan 28 17:19:50 crc kubenswrapper[5001]: I0128 17:19:50.296103 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nwlzb" Jan 28 17:19:50 crc kubenswrapper[5001]: I0128 17:19:50.337571 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nwlzb" Jan 28 17:19:50 crc kubenswrapper[5001]: I0128 17:19:50.513081 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-957th"] Jan 28 17:19:50 crc kubenswrapper[5001]: I0128 17:19:50.513420 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-957th" podUID="95c88444-d303-455f-b732-0e144a5f98e8" containerName="registry-server" containerID="cri-o://8ca7526c13699a66d61678cbfd2f064c8eebf1dc3e7584a693101610352a2f38" gracePeriod=2 Jan 28 17:19:51 crc kubenswrapper[5001]: I0128 17:19:51.072394 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2mp88" Jan 28 17:19:51 crc kubenswrapper[5001]: I0128 17:19:51.072437 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2mp88" Jan 28 17:19:51 crc kubenswrapper[5001]: I0128 17:19:51.912735 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-btn8g"] Jan 28 17:19:51 crc kubenswrapper[5001]: I0128 17:19:51.913342 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-btn8g" podUID="0e4e8139-d262-4a83-aecc-f41c19d0c775" containerName="registry-server" containerID="cri-o://737d5ba4d0214330a71829e4e4fb20fa7fe806a89fae56834c80839eb7dc3d34" gracePeriod=2 Jan 28 17:19:52 crc kubenswrapper[5001]: I0128 17:19:52.106970 5001 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-2mp88" podUID="c43a921e-0efa-4e2c-b425-21f7cd87a24b" containerName="registry-server" probeResult="failure" output=< Jan 28 17:19:52 crc kubenswrapper[5001]: timeout: failed to connect service ":50051" within 1s Jan 28 17:19:52 crc kubenswrapper[5001]: > Jan 28 17:19:52 crc kubenswrapper[5001]: I0128 17:19:52.236900 5001 generic.go:334] "Generic (PLEG): container finished" podID="95c88444-d303-455f-b732-0e144a5f98e8" containerID="8ca7526c13699a66d61678cbfd2f064c8eebf1dc3e7584a693101610352a2f38" exitCode=0 Jan 28 17:19:52 crc kubenswrapper[5001]: I0128 17:19:52.236953 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-957th" event={"ID":"95c88444-d303-455f-b732-0e144a5f98e8","Type":"ContainerDied","Data":"8ca7526c13699a66d61678cbfd2f064c8eebf1dc3e7584a693101610352a2f38"} Jan 28 17:19:52 crc kubenswrapper[5001]: I0128 17:19:52.925573 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" podUID="f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" containerName="oauth-openshift" containerID="cri-o://fc686689a6157690ca98c1a4ac2ec563379c9fb5c2c3a7c14cdc216b541a9baf" gracePeriod=15 Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.127237 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-957th" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.208270 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c88444-d303-455f-b732-0e144a5f98e8-utilities\") pod \"95c88444-d303-455f-b732-0e144a5f98e8\" (UID: \"95c88444-d303-455f-b732-0e144a5f98e8\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.208328 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b77tb\" (UniqueName: \"kubernetes.io/projected/95c88444-d303-455f-b732-0e144a5f98e8-kube-api-access-b77tb\") pod \"95c88444-d303-455f-b732-0e144a5f98e8\" (UID: \"95c88444-d303-455f-b732-0e144a5f98e8\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.208365 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c88444-d303-455f-b732-0e144a5f98e8-catalog-content\") pod \"95c88444-d303-455f-b732-0e144a5f98e8\" (UID: \"95c88444-d303-455f-b732-0e144a5f98e8\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.209585 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95c88444-d303-455f-b732-0e144a5f98e8-utilities" (OuterVolumeSpecName: "utilities") pod "95c88444-d303-455f-b732-0e144a5f98e8" (UID: "95c88444-d303-455f-b732-0e144a5f98e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.215646 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c88444-d303-455f-b732-0e144a5f98e8-kube-api-access-b77tb" (OuterVolumeSpecName: "kube-api-access-b77tb") pod "95c88444-d303-455f-b732-0e144a5f98e8" (UID: "95c88444-d303-455f-b732-0e144a5f98e8"). InnerVolumeSpecName "kube-api-access-b77tb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.245116 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj9wh" event={"ID":"24b568f4-71c2-4cae-932f-b6f1a2daf7a5","Type":"ContainerStarted","Data":"03775ff560b8149fc347f04a8699aa469dd86f375151b4d2aaa3c8241df6a115"} Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.248174 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97z2w" event={"ID":"04b4625e-0b3f-44a9-b1a9-5855e74eef29","Type":"ContainerStarted","Data":"f9a9097838e06b3cad8fd739c837933210ad56d1e7773d85b88f6f75e9bc11fa"} Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.251218 5001 generic.go:334] "Generic (PLEG): container finished" podID="f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" containerID="fc686689a6157690ca98c1a4ac2ec563379c9fb5c2c3a7c14cdc216b541a9baf" exitCode=0 Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.251273 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" event={"ID":"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a","Type":"ContainerDied","Data":"fc686689a6157690ca98c1a4ac2ec563379c9fb5c2c3a7c14cdc216b541a9baf"} Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.260133 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-957th" event={"ID":"95c88444-d303-455f-b732-0e144a5f98e8","Type":"ContainerDied","Data":"5349c02bc785e7d9eff9ad4bdb0e04704e37bcb0f6eff67f61061f762a2dc824"} Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.260245 5001 scope.go:117] "RemoveContainer" containerID="8ca7526c13699a66d61678cbfd2f064c8eebf1dc3e7584a693101610352a2f38" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.260457 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-957th" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.265336 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bj9wh" podStartSLOduration=3.795919102 podStartE2EDuration="1m24.265245961s" podCreationTimestamp="2026-01-28 17:18:29 +0000 UTC" firstStartedPulling="2026-01-28 17:18:32.485044587 +0000 UTC m=+158.652832807" lastFinishedPulling="2026-01-28 17:19:52.954371436 +0000 UTC m=+239.122159666" observedRunningTime="2026-01-28 17:19:53.263968173 +0000 UTC m=+239.431756413" watchObservedRunningTime="2026-01-28 17:19:53.265245961 +0000 UTC m=+239.433034191" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.273588 5001 generic.go:334] "Generic (PLEG): container finished" podID="0e4e8139-d262-4a83-aecc-f41c19d0c775" containerID="737d5ba4d0214330a71829e4e4fb20fa7fe806a89fae56834c80839eb7dc3d34" exitCode=0 Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.273635 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btn8g" event={"ID":"0e4e8139-d262-4a83-aecc-f41c19d0c775","Type":"ContainerDied","Data":"737d5ba4d0214330a71829e4e4fb20fa7fe806a89fae56834c80839eb7dc3d34"} Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.280664 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95c88444-d303-455f-b732-0e144a5f98e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95c88444-d303-455f-b732-0e144a5f98e8" (UID: "95c88444-d303-455f-b732-0e144a5f98e8"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.288248 5001 scope.go:117] "RemoveContainer" containerID="1eb568e8a153d4f722a9a349a2939ddae85bfe78ce8152cf4e0296d9fd47e28d" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.289466 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-97z2w" podStartSLOduration=1.775045509 podStartE2EDuration="1m22.289455251s" podCreationTimestamp="2026-01-28 17:18:31 +0000 UTC" firstStartedPulling="2026-01-28 17:18:32.484783651 +0000 UTC m=+158.652571881" lastFinishedPulling="2026-01-28 17:19:52.999193393 +0000 UTC m=+239.166981623" observedRunningTime="2026-01-28 17:19:53.288281828 +0000 UTC m=+239.456070058" watchObservedRunningTime="2026-01-28 17:19:53.289455251 +0000 UTC m=+239.457243481" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.309376 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c88444-d303-455f-b732-0e144a5f98e8-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.309423 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b77tb\" (UniqueName: \"kubernetes.io/projected/95c88444-d303-455f-b732-0e144a5f98e8-kube-api-access-b77tb\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.309438 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c88444-d303-455f-b732-0e144a5f98e8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.313240 5001 scope.go:117] "RemoveContainer" containerID="c749ae26ae24929e25fef5722e99ab8e5821cf038c0374e20bc22287c37bdf5c" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.491886 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.510962 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-audit-dir\") pod \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.511111 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" (UID: "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.511116 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-ocp-branding-template\") pod \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.513173 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-cliconfig\") pod \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.513299 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-template-error\") pod \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.513377 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-trusted-ca-bundle\") pod \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.513504 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwwkq\" (UniqueName: \"kubernetes.io/projected/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-kube-api-access-nwwkq\") pod \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.513565 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-router-certs\") pod \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.513631 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-audit-policies\") pod \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.513694 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-serving-cert\") pod \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.513745 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-service-ca\") pod \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 
17:19:53.513790 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-idp-0-file-data\") pod \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.513829 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-template-login\") pod \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.513868 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-template-provider-selection\") pod \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.513923 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-session\") pod \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\" (UID: \"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.514843 5001 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.517027 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" (UID: "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.517761 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" (UID: "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.522838 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" (UID: "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.523231 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-kube-api-access-nwwkq" (OuterVolumeSpecName: "kube-api-access-nwwkq") pod "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" (UID: "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a"). 
InnerVolumeSpecName "kube-api-access-nwwkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.526735 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" (UID: "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.528081 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" (UID: "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.533034 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" (UID: "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.541631 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" (UID: "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.542172 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" (UID: "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.542387 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" (UID: "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.543032 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-btn8g" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.550317 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" (UID: "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.552359 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" (UID: "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.555482 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" (UID: "f91ccb79-b729-40a9-bd10-a7d3a59a8f7a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.589317 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-957th"] Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.593849 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-957th"] Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.616171 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e4e8139-d262-4a83-aecc-f41c19d0c775-utilities\") pod \"0e4e8139-d262-4a83-aecc-f41c19d0c775\" (UID: \"0e4e8139-d262-4a83-aecc-f41c19d0c775\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.616243 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww7kn\" (UniqueName: \"kubernetes.io/projected/0e4e8139-d262-4a83-aecc-f41c19d0c775-kube-api-access-ww7kn\") pod \"0e4e8139-d262-4a83-aecc-f41c19d0c775\" (UID: \"0e4e8139-d262-4a83-aecc-f41c19d0c775\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.616273 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e4e8139-d262-4a83-aecc-f41c19d0c775-catalog-content\") pod \"0e4e8139-d262-4a83-aecc-f41c19d0c775\" (UID: \"0e4e8139-d262-4a83-aecc-f41c19d0c775\") " Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.616495 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.616511 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.616521 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.616530 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.616539 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwwkq\" (UniqueName: \"kubernetes.io/projected/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-kube-api-access-nwwkq\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.616547 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.616556 5001 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.616568 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.616579 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.616589 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.616598 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.616607 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.616617 5001 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.617285 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0e4e8139-d262-4a83-aecc-f41c19d0c775-utilities" (OuterVolumeSpecName: "utilities") pod "0e4e8139-d262-4a83-aecc-f41c19d0c775" (UID: "0e4e8139-d262-4a83-aecc-f41c19d0c775"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.619210 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e4e8139-d262-4a83-aecc-f41c19d0c775-kube-api-access-ww7kn" (OuterVolumeSpecName: "kube-api-access-ww7kn") pod "0e4e8139-d262-4a83-aecc-f41c19d0c775" (UID: "0e4e8139-d262-4a83-aecc-f41c19d0c775"). InnerVolumeSpecName "kube-api-access-ww7kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.670637 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e4e8139-d262-4a83-aecc-f41c19d0c775-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e4e8139-d262-4a83-aecc-f41c19d0c775" (UID: "0e4e8139-d262-4a83-aecc-f41c19d0c775"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.716955 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww7kn\" (UniqueName: \"kubernetes.io/projected/0e4e8139-d262-4a83-aecc-f41c19d0c775-kube-api-access-ww7kn\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.717028 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e4e8139-d262-4a83-aecc-f41c19d0c775-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:53 crc kubenswrapper[5001]: I0128 17:19:53.717038 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e4e8139-d262-4a83-aecc-f41c19d0c775-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:54 crc kubenswrapper[5001]: I0128 17:19:54.280888 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" event={"ID":"f91ccb79-b729-40a9-bd10-a7d3a59a8f7a","Type":"ContainerDied","Data":"ea1049937d36ad5221412ee4b0eca686a57b5ce7f5e4648c7e9e98b7ed0dfa27"} Jan 28 17:19:54 crc kubenswrapper[5001]: I0128 17:19:54.280948 5001 scope.go:117] "RemoveContainer" containerID="fc686689a6157690ca98c1a4ac2ec563379c9fb5c2c3a7c14cdc216b541a9baf" Jan 28 17:19:54 crc kubenswrapper[5001]: I0128 17:19:54.280969 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fhtpl" Jan 28 17:19:54 crc kubenswrapper[5001]: I0128 17:19:54.286607 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-btn8g" Jan 28 17:19:54 crc kubenswrapper[5001]: I0128 17:19:54.289070 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btn8g" event={"ID":"0e4e8139-d262-4a83-aecc-f41c19d0c775","Type":"ContainerDied","Data":"63b2c037b29f6d58a56e6c45f28559f595e953a34c239f9041407e116e456413"} Jan 28 17:19:54 crc kubenswrapper[5001]: I0128 17:19:54.301031 5001 scope.go:117] "RemoveContainer" containerID="737d5ba4d0214330a71829e4e4fb20fa7fe806a89fae56834c80839eb7dc3d34" Jan 28 17:19:54 crc kubenswrapper[5001]: I0128 17:19:54.312524 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fhtpl"] Jan 28 17:19:54 crc kubenswrapper[5001]: I0128 17:19:54.314961 5001 scope.go:117] "RemoveContainer" containerID="7fd7d071fe5e824d5dac68af97110077d574d450565fa27a7c6a5894fe4d2c70" Jan 28 17:19:54 crc kubenswrapper[5001]: I0128 17:19:54.315577 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fhtpl"] Jan 28 17:19:54 crc kubenswrapper[5001]: I0128 17:19:54.324050 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-btn8g"] Jan 28 17:19:54 crc kubenswrapper[5001]: I0128 17:19:54.330073 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-btn8g"] Jan 28 17:19:54 crc kubenswrapper[5001]: I0128 17:19:54.344050 5001 scope.go:117] "RemoveContainer" containerID="806d888cd1183837d04ad14e1d58055abdddfa8dfcb3ad387185fce838cf1345" Jan 28 17:19:54 crc kubenswrapper[5001]: I0128 17:19:54.603928 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e4e8139-d262-4a83-aecc-f41c19d0c775" path="/var/lib/kubelet/pods/0e4e8139-d262-4a83-aecc-f41c19d0c775/volumes" Jan 28 17:19:54 crc kubenswrapper[5001]: I0128 17:19:54.604587 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95c88444-d303-455f-b732-0e144a5f98e8" path="/var/lib/kubelet/pods/95c88444-d303-455f-b732-0e144a5f98e8/volumes" Jan 28 17:19:54 crc kubenswrapper[5001]: I0128 17:19:54.605209 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" path="/var/lib/kubelet/pods/f91ccb79-b729-40a9-bd10-a7d3a59a8f7a/volumes" Jan 28 17:19:57 crc kubenswrapper[5001]: I0128 17:19:57.564408 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rnl7x" Jan 28 17:19:57 crc kubenswrapper[5001]: I0128 17:19:57.565896 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rnl7x" Jan 28 17:19:57 crc kubenswrapper[5001]: I0128 17:19:57.603946 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rnl7x" Jan 28 17:19:58 crc kubenswrapper[5001]: I0128 17:19:58.346985 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rnl7x" Jan 28 17:20:00 crc kubenswrapper[5001]: I0128 17:20:00.293439 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bj9wh" Jan 28 17:20:00 crc kubenswrapper[5001]: I0128 17:20:00.293512 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bj9wh" Jan 28 17:20:00 crc 
kubenswrapper[5001]: I0128 17:20:00.337292 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bj9wh" Jan 28 17:20:00 crc kubenswrapper[5001]: I0128 17:20:00.339419 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nwlzb" Jan 28 17:20:00 crc kubenswrapper[5001]: I0128 17:20:00.376934 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bj9wh" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.110793 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2mp88" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.153090 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2mp88" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.386864 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-97z2w" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.387645 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-97z2w" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.432161 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-97z2w" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.675816 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4"] Jan 28 17:20:01 crc kubenswrapper[5001]: E0128 17:20:01.676840 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c88444-d303-455f-b732-0e144a5f98e8" containerName="extract-utilities" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.676959 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c88444-d303-455f-b732-0e144a5f98e8" containerName="extract-utilities" Jan 28 17:20:01 crc kubenswrapper[5001]: E0128 17:20:01.677095 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c88444-d303-455f-b732-0e144a5f98e8" containerName="extract-content" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.677195 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c88444-d303-455f-b732-0e144a5f98e8" containerName="extract-content" Jan 28 17:20:01 crc kubenswrapper[5001]: E0128 17:20:01.677289 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4e8139-d262-4a83-aecc-f41c19d0c775" containerName="extract-content" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.677364 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4e8139-d262-4a83-aecc-f41c19d0c775" containerName="extract-content" Jan 28 17:20:01 crc kubenswrapper[5001]: E0128 17:20:01.677481 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4e8139-d262-4a83-aecc-f41c19d0c775" containerName="extract-utilities" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.677607 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4e8139-d262-4a83-aecc-f41c19d0c775" containerName="extract-utilities" Jan 28 17:20:01 crc kubenswrapper[5001]: E0128 17:20:01.677731 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4e8139-d262-4a83-aecc-f41c19d0c775" containerName="registry-server" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.677820 5001 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="0e4e8139-d262-4a83-aecc-f41c19d0c775" containerName="registry-server" Jan 28 17:20:01 crc kubenswrapper[5001]: E0128 17:20:01.677925 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" containerName="oauth-openshift" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.678051 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" containerName="oauth-openshift" Jan 28 17:20:01 crc kubenswrapper[5001]: E0128 17:20:01.678189 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c88444-d303-455f-b732-0e144a5f98e8" containerName="registry-server" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.678307 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c88444-d303-455f-b732-0e144a5f98e8" containerName="registry-server" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.678557 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e4e8139-d262-4a83-aecc-f41c19d0c775" containerName="registry-server" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.678664 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c88444-d303-455f-b732-0e144a5f98e8" containerName="registry-server" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.678766 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="f91ccb79-b729-40a9-bd10-a7d3a59a8f7a" containerName="oauth-openshift" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.679392 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.683377 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.683514 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.683707 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.683766 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.683859 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.684107 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.684298 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.685202 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.685373 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.685488 5001 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.686084 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.686264 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.690787 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4"] Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.734272 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.738566 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.740586 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.836002 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-router-certs\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.836109 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-user-template-login\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.836210 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.836273 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.836310 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-user-template-error\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc 
kubenswrapper[5001]: I0128 17:20:01.836357 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.836375 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4bs9\" (UniqueName: \"kubernetes.io/projected/716c725e-e0aa-455a-a6f3-c5d488403f4e-kube-api-access-k4bs9\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.836398 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/716c725e-e0aa-455a-a6f3-c5d488403f4e-audit-policies\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.836418 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/716c725e-e0aa-455a-a6f3-c5d488403f4e-audit-dir\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.836441 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-session\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.836535 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.836568 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.836593 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " 
pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.836624 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.938075 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-user-template-login\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.938134 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.938160 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.938174 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-user-template-error\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.938207 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.938222 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4bs9\" (UniqueName: \"kubernetes.io/projected/716c725e-e0aa-455a-a6f3-c5d488403f4e-kube-api-access-k4bs9\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.938239 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/716c725e-e0aa-455a-a6f3-c5d488403f4e-audit-policies\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " 
pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.938255 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/716c725e-e0aa-455a-a6f3-c5d488403f4e-audit-dir\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.938273 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-session\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.938308 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.938325 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.938344 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.938363 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.938386 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-router-certs\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.939209 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " 
pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.939247 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/716c725e-e0aa-455a-a6f3-c5d488403f4e-audit-dir\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.939782 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.940029 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/716c725e-e0aa-455a-a6f3-c5d488403f4e-audit-policies\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.940109 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.943738 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.943962 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-user-template-login\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.944353 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.944607 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-session\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.944845 5001 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-user-template-error\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.945189 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.945360 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-router-certs\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.945505 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/716c725e-e0aa-455a-a6f3-c5d488403f4e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:01 crc kubenswrapper[5001]: I0128 17:20:01.955747 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4bs9\" (UniqueName: \"kubernetes.io/projected/716c725e-e0aa-455a-a6f3-c5d488403f4e-kube-api-access-k4bs9\") pod \"oauth-openshift-5cc5b65bd-7bzt4\" (UID: \"716c725e-e0aa-455a-a6f3-c5d488403f4e\") " pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.050481 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.187089 5001 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.197919 5001 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.198381 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76" gracePeriod=15 Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.198557 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7" gracePeriod=15 Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.198624 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723" gracePeriod=15 Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.198631 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d" gracePeriod=15 Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.198712 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0" gracePeriod=15 Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.198749 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.199440 5001 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 17:20:02 crc kubenswrapper[5001]: E0128 17:20:02.199779 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.199795 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 17:20:02 crc kubenswrapper[5001]: E0128 17:20:02.199806 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.199813 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 17:20:02 crc kubenswrapper[5001]: E0128 17:20:02.199830 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.199838 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 17:20:02 crc kubenswrapper[5001]: E0128 17:20:02.199851 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.199858 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 17:20:02 crc kubenswrapper[5001]: E0128 17:20:02.199866 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.199872 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 17:20:02 crc kubenswrapper[5001]: E0128 17:20:02.199881 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.199891 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 17:20:02 crc kubenswrapper[5001]: E0128 17:20:02.199901 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.199907 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.200120 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.200131 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" 
Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.200143 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.200152 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.200162 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.200172 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.283291 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.343051 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.343114 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.343137 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.343157 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.343175 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.343197 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.343232 
5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.343245 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.368939 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-97z2w" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.369789 5001 status_manager.go:851] "Failed to get status for pod" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" pod="openshift-marketplace/redhat-operators-97z2w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97z2w\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.370022 5001 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.370251 5001 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.444687 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.444794 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.444825 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.444893 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.444926 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.444962 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.445017 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.445098 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.445133 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.445126 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.445159 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.445162 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.445183 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.445199 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.445449 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.445531 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: I0128 17:20:02.584474 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 17:20:02 crc kubenswrapper[5001]: W0128 17:20:02.604487 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-ba10bedc5e360c17d00dcb8508416cc59d3fd2de4b6764d58423229e7b5a2a2b WatchSource:0}: Error finding container ba10bedc5e360c17d00dcb8508416cc59d3fd2de4b6764d58423229e7b5a2a2b: Status 404 returned error can't find the container with id ba10bedc5e360c17d00dcb8508416cc59d3fd2de4b6764d58423229e7b5a2a2b Jan 28 17:20:02 crc kubenswrapper[5001]: E0128 17:20:02.607094 5001 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.30:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ef4be7e781f3e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 17:20:02.605948734 +0000 UTC m=+248.773736964,LastTimestamp:2026-01-28 17:20:02.605948734 +0000 UTC m=+248.773736964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 17:20:02 crc kubenswrapper[5001]: E0128 17:20:02.743209 5001 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 28 17:20:02 crc kubenswrapper[5001]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5cc5b65bd-7bzt4_openshift-authentication_716c725e-e0aa-455a-a6f3-c5d488403f4e_0(5f3e6bf07529306c431833429e379b5246b1705e29f48b3c6c4feaa3346ce359): 
error adding pod openshift-authentication_oauth-openshift-5cc5b65bd-7bzt4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5f3e6bf07529306c431833429e379b5246b1705e29f48b3c6c4feaa3346ce359" Netns:"/var/run/netns/3ffd3a79-43b0-4096-beaa-9298d8c4fcaa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5cc5b65bd-7bzt4;K8S_POD_INFRA_CONTAINER_ID=5f3e6bf07529306c431833429e379b5246b1705e29f48b3c6c4feaa3346ce359;K8S_POD_UID=716c725e-e0aa-455a-a6f3-c5d488403f4e" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4] networking: Multus: [openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4/716c725e-e0aa-455a-a6f3-c5d488403f4e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5cc5b65bd-7bzt4 in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5cc5b65bd-7bzt4 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5cc5b65bd-7bzt4?timeout=1m0s": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:20:02 crc kubenswrapper[5001]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 17:20:02 crc kubenswrapper[5001]: > Jan 28 17:20:02 crc kubenswrapper[5001]: E0128 17:20:02.743281 5001 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 28 17:20:02 crc kubenswrapper[5001]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5cc5b65bd-7bzt4_openshift-authentication_716c725e-e0aa-455a-a6f3-c5d488403f4e_0(5f3e6bf07529306c431833429e379b5246b1705e29f48b3c6c4feaa3346ce359): error adding pod openshift-authentication_oauth-openshift-5cc5b65bd-7bzt4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5f3e6bf07529306c431833429e379b5246b1705e29f48b3c6c4feaa3346ce359" Netns:"/var/run/netns/3ffd3a79-43b0-4096-beaa-9298d8c4fcaa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5cc5b65bd-7bzt4;K8S_POD_INFRA_CONTAINER_ID=5f3e6bf07529306c431833429e379b5246b1705e29f48b3c6c4feaa3346ce359;K8S_POD_UID=716c725e-e0aa-455a-a6f3-c5d488403f4e" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4] networking: Multus: [openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4/716c725e-e0aa-455a-a6f3-c5d488403f4e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5cc5b65bd-7bzt4 in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5cc5b65bd-7bzt4 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5cc5b65bd-7bzt4?timeout=1m0s": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:20:02 crc kubenswrapper[5001]: ': 
StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 17:20:02 crc kubenswrapper[5001]: > pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:02 crc kubenswrapper[5001]: E0128 17:20:02.743328 5001 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 28 17:20:02 crc kubenswrapper[5001]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5cc5b65bd-7bzt4_openshift-authentication_716c725e-e0aa-455a-a6f3-c5d488403f4e_0(5f3e6bf07529306c431833429e379b5246b1705e29f48b3c6c4feaa3346ce359): error adding pod openshift-authentication_oauth-openshift-5cc5b65bd-7bzt4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5f3e6bf07529306c431833429e379b5246b1705e29f48b3c6c4feaa3346ce359" Netns:"/var/run/netns/3ffd3a79-43b0-4096-beaa-9298d8c4fcaa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5cc5b65bd-7bzt4;K8S_POD_INFRA_CONTAINER_ID=5f3e6bf07529306c431833429e379b5246b1705e29f48b3c6c4feaa3346ce359;K8S_POD_UID=716c725e-e0aa-455a-a6f3-c5d488403f4e" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4] networking: Multus: [openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4/716c725e-e0aa-455a-a6f3-c5d488403f4e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5cc5b65bd-7bzt4 in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5cc5b65bd-7bzt4 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5cc5b65bd-7bzt4?timeout=1m0s": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:20:02 crc kubenswrapper[5001]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 17:20:02 crc kubenswrapper[5001]: > pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:02 crc kubenswrapper[5001]: E0128 17:20:02.743382 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-5cc5b65bd-7bzt4_openshift-authentication(716c725e-e0aa-455a-a6f3-c5d488403f4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-5cc5b65bd-7bzt4_openshift-authentication(716c725e-e0aa-455a-a6f3-c5d488403f4e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5cc5b65bd-7bzt4_openshift-authentication_716c725e-e0aa-455a-a6f3-c5d488403f4e_0(5f3e6bf07529306c431833429e379b5246b1705e29f48b3c6c4feaa3346ce359): error adding pod openshift-authentication_oauth-openshift-5cc5b65bd-7bzt4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" 
name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"5f3e6bf07529306c431833429e379b5246b1705e29f48b3c6c4feaa3346ce359\\\" Netns:\\\"/var/run/netns/3ffd3a79-43b0-4096-beaa-9298d8c4fcaa\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5cc5b65bd-7bzt4;K8S_POD_INFRA_CONTAINER_ID=5f3e6bf07529306c431833429e379b5246b1705e29f48b3c6c4feaa3346ce359;K8S_POD_UID=716c725e-e0aa-455a-a6f3-c5d488403f4e\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4] networking: Multus: [openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4/716c725e-e0aa-455a-a6f3-c5d488403f4e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5cc5b65bd-7bzt4 in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5cc5b65bd-7bzt4 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5cc5b65bd-7bzt4?timeout=1m0s\\\": dial tcp 38.102.83.30:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" podUID="716c725e-e0aa-455a-a6f3-c5d488403f4e" Jan 28 17:20:03 crc kubenswrapper[5001]: I0128 17:20:03.336741 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 17:20:03 crc kubenswrapper[5001]: I0128 17:20:03.338145 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 17:20:03 crc kubenswrapper[5001]: I0128 17:20:03.338901 5001 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7" exitCode=0 Jan 28 17:20:03 crc kubenswrapper[5001]: I0128 17:20:03.338996 5001 scope.go:117] "RemoveContainer" containerID="b363065d07dfa5bddaee2f4fc740c05179cf7ce974637305903e835edeca5500" Jan 28 17:20:03 crc kubenswrapper[5001]: I0128 17:20:03.339029 5001 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0" exitCode=0 Jan 28 17:20:03 crc kubenswrapper[5001]: I0128 17:20:03.339114 5001 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d" exitCode=0 Jan 28 17:20:03 crc kubenswrapper[5001]: I0128 17:20:03.339126 5001 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723" exitCode=2 Jan 28 17:20:03 crc kubenswrapper[5001]: I0128 17:20:03.341829 
5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1421d899df9838e9e1bc2ad4534f06bcde81fb151e8d527a9c77c8ab40df962d"} Jan 28 17:20:03 crc kubenswrapper[5001]: I0128 17:20:03.341874 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ba10bedc5e360c17d00dcb8508416cc59d3fd2de4b6764d58423229e7b5a2a2b"} Jan 28 17:20:03 crc kubenswrapper[5001]: I0128 17:20:03.342747 5001 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:03 crc kubenswrapper[5001]: I0128 17:20:03.343183 5001 status_manager.go:851] "Failed to get status for pod" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" pod="openshift-marketplace/redhat-operators-97z2w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97z2w\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:03 crc kubenswrapper[5001]: I0128 17:20:03.343832 5001 generic.go:334] "Generic (PLEG): container finished" podID="7075bc2e-15dc-4bbc-a5d7-f77c163576fa" containerID="3c16635a456b0f78abd4ae4b3863c01eb6279849bdbacc27ee203e6469658bb5" exitCode=0 Jan 28 17:20:03 crc kubenswrapper[5001]: I0128 17:20:03.343913 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:03 crc kubenswrapper[5001]: I0128 17:20:03.344030 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7075bc2e-15dc-4bbc-a5d7-f77c163576fa","Type":"ContainerDied","Data":"3c16635a456b0f78abd4ae4b3863c01eb6279849bdbacc27ee203e6469658bb5"} Jan 28 17:20:03 crc kubenswrapper[5001]: I0128 17:20:03.344484 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:03 crc kubenswrapper[5001]: I0128 17:20:03.345168 5001 status_manager.go:851] "Failed to get status for pod" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" pod="openshift-marketplace/redhat-operators-97z2w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97z2w\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:03 crc kubenswrapper[5001]: I0128 17:20:03.345455 5001 status_manager.go:851] "Failed to get status for pod" podUID="7075bc2e-15dc-4bbc-a5d7-f77c163576fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:03 crc kubenswrapper[5001]: I0128 17:20:03.345796 5001 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:03 crc kubenswrapper[5001]: E0128 17:20:03.972598 5001 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 28 17:20:03 crc kubenswrapper[5001]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5cc5b65bd-7bzt4_openshift-authentication_716c725e-e0aa-455a-a6f3-c5d488403f4e_0(f8dad6eb7326bbada7ff0c1abc2b0b5bafefa56ce726f759db7dd37f80d1d26f): error adding pod openshift-authentication_oauth-openshift-5cc5b65bd-7bzt4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f8dad6eb7326bbada7ff0c1abc2b0b5bafefa56ce726f759db7dd37f80d1d26f" Netns:"/var/run/netns/35d723d7-ff1f-4ffd-a034-dbda96e8b1a0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5cc5b65bd-7bzt4;K8S_POD_INFRA_CONTAINER_ID=f8dad6eb7326bbada7ff0c1abc2b0b5bafefa56ce726f759db7dd37f80d1d26f;K8S_POD_UID=716c725e-e0aa-455a-a6f3-c5d488403f4e" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4] networking: Multus: [openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4/716c725e-e0aa-455a-a6f3-c5d488403f4e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5cc5b65bd-7bzt4 in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5cc5b65bd-7bzt4 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5cc5b65bd-7bzt4?timeout=1m0s": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:20:03 crc kubenswrapper[5001]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 17:20:03 crc kubenswrapper[5001]: > Jan 28 17:20:03 crc kubenswrapper[5001]: E0128 17:20:03.972871 5001 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 28 17:20:03 crc kubenswrapper[5001]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5cc5b65bd-7bzt4_openshift-authentication_716c725e-e0aa-455a-a6f3-c5d488403f4e_0(f8dad6eb7326bbada7ff0c1abc2b0b5bafefa56ce726f759db7dd37f80d1d26f): error adding pod openshift-authentication_oauth-openshift-5cc5b65bd-7bzt4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f8dad6eb7326bbada7ff0c1abc2b0b5bafefa56ce726f759db7dd37f80d1d26f" Netns:"/var/run/netns/35d723d7-ff1f-4ffd-a034-dbda96e8b1a0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5cc5b65bd-7bzt4;K8S_POD_INFRA_CONTAINER_ID=f8dad6eb7326bbada7ff0c1abc2b0b5bafefa56ce726f759db7dd37f80d1d26f;K8S_POD_UID=716c725e-e0aa-455a-a6f3-c5d488403f4e" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4] networking: Multus: [openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4/716c725e-e0aa-455a-a6f3-c5d488403f4e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5cc5b65bd-7bzt4 in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5cc5b65bd-7bzt4 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5cc5b65bd-7bzt4?timeout=1m0s": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:20:03 crc kubenswrapper[5001]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 17:20:03 crc kubenswrapper[5001]: > pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:03 crc kubenswrapper[5001]: E0128 17:20:03.972893 5001 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 28 17:20:03 crc kubenswrapper[5001]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5cc5b65bd-7bzt4_openshift-authentication_716c725e-e0aa-455a-a6f3-c5d488403f4e_0(f8dad6eb7326bbada7ff0c1abc2b0b5bafefa56ce726f759db7dd37f80d1d26f): error adding pod openshift-authentication_oauth-openshift-5cc5b65bd-7bzt4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f8dad6eb7326bbada7ff0c1abc2b0b5bafefa56ce726f759db7dd37f80d1d26f" Netns:"/var/run/netns/35d723d7-ff1f-4ffd-a034-dbda96e8b1a0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5cc5b65bd-7bzt4;K8S_POD_INFRA_CONTAINER_ID=f8dad6eb7326bbada7ff0c1abc2b0b5bafefa56ce726f759db7dd37f80d1d26f;K8S_POD_UID=716c725e-e0aa-455a-a6f3-c5d488403f4e" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4] networking: Multus: [openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4/716c725e-e0aa-455a-a6f3-c5d488403f4e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod 
oauth-openshift-5cc5b65bd-7bzt4 in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5cc5b65bd-7bzt4 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5cc5b65bd-7bzt4?timeout=1m0s": dial tcp 38.102.83.30:6443: connect: connection refused Jan 28 17:20:03 crc kubenswrapper[5001]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 17:20:03 crc kubenswrapper[5001]: > pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:03 crc kubenswrapper[5001]: E0128 17:20:03.972963 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-5cc5b65bd-7bzt4_openshift-authentication(716c725e-e0aa-455a-a6f3-c5d488403f4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-5cc5b65bd-7bzt4_openshift-authentication(716c725e-e0aa-455a-a6f3-c5d488403f4e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5cc5b65bd-7bzt4_openshift-authentication_716c725e-e0aa-455a-a6f3-c5d488403f4e_0(f8dad6eb7326bbada7ff0c1abc2b0b5bafefa56ce726f759db7dd37f80d1d26f): error adding pod openshift-authentication_oauth-openshift-5cc5b65bd-7bzt4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"f8dad6eb7326bbada7ff0c1abc2b0b5bafefa56ce726f759db7dd37f80d1d26f\\\" Netns:\\\"/var/run/netns/35d723d7-ff1f-4ffd-a034-dbda96e8b1a0\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5cc5b65bd-7bzt4;K8S_POD_INFRA_CONTAINER_ID=f8dad6eb7326bbada7ff0c1abc2b0b5bafefa56ce726f759db7dd37f80d1d26f;K8S_POD_UID=716c725e-e0aa-455a-a6f3-c5d488403f4e\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4] networking: Multus: [openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4/716c725e-e0aa-455a-a6f3-c5d488403f4e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5cc5b65bd-7bzt4 in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5cc5b65bd-7bzt4 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5cc5b65bd-7bzt4?timeout=1m0s\\\": dial tcp 38.102.83.30:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" podUID="716c725e-e0aa-455a-a6f3-c5d488403f4e" Jan 28 17:20:04 
crc kubenswrapper[5001]: E0128 17:20:04.257501 5001 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:04 crc kubenswrapper[5001]: E0128 17:20:04.257948 5001 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:04 crc kubenswrapper[5001]: E0128 17:20:04.258412 5001 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:04 crc kubenswrapper[5001]: E0128 17:20:04.258689 5001 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:04 crc kubenswrapper[5001]: E0128 17:20:04.258939 5001 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.258968 5001 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 28 17:20:04 crc kubenswrapper[5001]: E0128 17:20:04.259205 5001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="200ms" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.356888 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 17:20:04 crc kubenswrapper[5001]: E0128 17:20:04.459732 5001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="400ms" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.579882 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.580642 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.581118 5001 status_manager.go:851] "Failed to get status for pod" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" pod="openshift-marketplace/redhat-operators-97z2w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97z2w\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.581291 5001 status_manager.go:851] "Failed to get status for pod" podUID="7075bc2e-15dc-4bbc-a5d7-f77c163576fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.581464 5001 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.581684 5001 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.596216 5001 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.596580 5001 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.597103 5001 status_manager.go:851] "Failed to get status for pod" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" pod="openshift-marketplace/redhat-operators-97z2w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97z2w\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.597360 5001 status_manager.go:851] "Failed to get status for pod" podUID="7075bc2e-15dc-4bbc-a5d7-f77c163576fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.687412 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.687880 5001 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.688228 5001 status_manager.go:851] "Failed to get status for pod" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" pod="openshift-marketplace/redhat-operators-97z2w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97z2w\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.688703 5001 status_manager.go:851] "Failed to get status for pod" podUID="7075bc2e-15dc-4bbc-a5d7-f77c163576fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.772894 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.772958 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.773029 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.773216 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.773238 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.773326 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.773501 5001 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.773512 5001 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.773523 5001 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:04 crc kubenswrapper[5001]: E0128 17:20:04.861074 5001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="800ms" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.874591 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7075bc2e-15dc-4bbc-a5d7-f77c163576fa-kube-api-access\") pod \"7075bc2e-15dc-4bbc-a5d7-f77c163576fa\" (UID: \"7075bc2e-15dc-4bbc-a5d7-f77c163576fa\") " Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.874684 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7075bc2e-15dc-4bbc-a5d7-f77c163576fa-kubelet-dir\") pod \"7075bc2e-15dc-4bbc-a5d7-f77c163576fa\" (UID: \"7075bc2e-15dc-4bbc-a5d7-f77c163576fa\") " Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.874846 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7075bc2e-15dc-4bbc-a5d7-f77c163576fa-var-lock\") pod \"7075bc2e-15dc-4bbc-a5d7-f77c163576fa\" (UID: \"7075bc2e-15dc-4bbc-a5d7-f77c163576fa\") " Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.874997 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7075bc2e-15dc-4bbc-a5d7-f77c163576fa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7075bc2e-15dc-4bbc-a5d7-f77c163576fa" (UID: "7075bc2e-15dc-4bbc-a5d7-f77c163576fa"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.875121 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7075bc2e-15dc-4bbc-a5d7-f77c163576fa-var-lock" (OuterVolumeSpecName: "var-lock") pod "7075bc2e-15dc-4bbc-a5d7-f77c163576fa" (UID: "7075bc2e-15dc-4bbc-a5d7-f77c163576fa"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.875214 5001 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7075bc2e-15dc-4bbc-a5d7-f77c163576fa-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.875228 5001 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7075bc2e-15dc-4bbc-a5d7-f77c163576fa-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.879842 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7075bc2e-15dc-4bbc-a5d7-f77c163576fa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7075bc2e-15dc-4bbc-a5d7-f77c163576fa" (UID: "7075bc2e-15dc-4bbc-a5d7-f77c163576fa"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:20:04 crc kubenswrapper[5001]: I0128 17:20:04.976057 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7075bc2e-15dc-4bbc-a5d7-f77c163576fa-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.368606 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.369605 5001 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76" exitCode=0 Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.369680 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.369697 5001 scope.go:117] "RemoveContainer" containerID="9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.370305 5001 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.370538 5001 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.371503 5001 status_manager.go:851] "Failed to get status for pod" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" pod="openshift-marketplace/redhat-operators-97z2w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97z2w\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.371758 5001 status_manager.go:851] "Failed to get status for pod" podUID="7075bc2e-15dc-4bbc-a5d7-f77c163576fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.372409 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7075bc2e-15dc-4bbc-a5d7-f77c163576fa","Type":"ContainerDied","Data":"16f87bf676c0340834685fcba3430c0cbdb89ae693581b66a64df16791ebc629"} Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.372457 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16f87bf676c0340834685fcba3430c0cbdb89ae693581b66a64df16791ebc629" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.372495 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.388127 5001 scope.go:117] "RemoveContainer" containerID="4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.388674 5001 status_manager.go:851] "Failed to get status for pod" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" pod="openshift-marketplace/redhat-operators-97z2w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97z2w\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.389221 5001 status_manager.go:851] "Failed to get status for pod" podUID="7075bc2e-15dc-4bbc-a5d7-f77c163576fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.389525 5001 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.389715 5001 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.390126 5001 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.390388 5001 status_manager.go:851] "Failed to get status for pod" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" pod="openshift-marketplace/redhat-operators-97z2w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97z2w\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.390826 5001 status_manager.go:851] "Failed to get status for pod" podUID="7075bc2e-15dc-4bbc-a5d7-f77c163576fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.391154 5001 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.402995 5001 scope.go:117] "RemoveContainer" 
containerID="6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.419601 5001 scope.go:117] "RemoveContainer" containerID="7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.435334 5001 scope.go:117] "RemoveContainer" containerID="eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.459196 5001 scope.go:117] "RemoveContainer" containerID="ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.483151 5001 scope.go:117] "RemoveContainer" containerID="9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7" Jan 28 17:20:05 crc kubenswrapper[5001]: E0128 17:20:05.483711 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\": container with ID starting with 9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7 not found: ID does not exist" containerID="9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.483752 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7"} err="failed to get container status \"9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\": rpc error: code = NotFound desc = could not find container \"9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7\": container with ID starting with 9cdea72e347d585c2d2a26e9161a8188f499d4870fa5c3c18ee6c1eb519f49c7 not found: ID does not exist" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.483935 5001 scope.go:117] "RemoveContainer" containerID="4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0" Jan 28 17:20:05 crc kubenswrapper[5001]: E0128 17:20:05.486933 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\": container with ID starting with 4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0 not found: ID does not exist" containerID="4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.487004 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0"} err="failed to get container status \"4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\": rpc error: code = NotFound desc = could not find container \"4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0\": container with ID starting with 4a6a6e4811860c6f824dbd6768cdda8941e892add9a9f7b79dd18870d2a269a0 not found: ID does not exist" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.487033 5001 scope.go:117] "RemoveContainer" containerID="6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d" Jan 28 17:20:05 crc kubenswrapper[5001]: E0128 17:20:05.487545 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\": container with ID starting 
with 6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d not found: ID does not exist" containerID="6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.487677 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d"} err="failed to get container status \"6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\": rpc error: code = NotFound desc = could not find container \"6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d\": container with ID starting with 6a18ab4ca38d1a7cbb53f250c9324719279fc2c437ef6a042348273cc929268d not found: ID does not exist" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.487781 5001 scope.go:117] "RemoveContainer" containerID="7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723" Jan 28 17:20:05 crc kubenswrapper[5001]: E0128 17:20:05.488262 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\": container with ID starting with 7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723 not found: ID does not exist" containerID="7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.488379 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723"} err="failed to get container status \"7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\": rpc error: code = NotFound desc = could not find container \"7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723\": container with ID starting with 7065d8cd7869275a08775d95e2ac43cd1a756bb31e274530e425d0a692206723 not found: ID does not exist" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.488489 5001 scope.go:117] "RemoveContainer" containerID="eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76" Jan 28 17:20:05 crc kubenswrapper[5001]: E0128 17:20:05.488910 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\": container with ID starting with eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76 not found: ID does not exist" containerID="eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.489024 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76"} err="failed to get container status \"eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\": rpc error: code = NotFound desc = could not find container \"eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76\": container with ID starting with eb35dc82c3439493d412668c42c7dea8e444fe5e8bbc29b20fe0ce839cf39f76 not found: ID does not exist" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.489119 5001 scope.go:117] "RemoveContainer" containerID="ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf" Jan 28 17:20:05 crc kubenswrapper[5001]: E0128 17:20:05.489911 5001 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\": container with ID starting with ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf not found: ID does not exist" containerID="ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf" Jan 28 17:20:05 crc kubenswrapper[5001]: I0128 17:20:05.489963 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf"} err="failed to get container status \"ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\": rpc error: code = NotFound desc = could not find container \"ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf\": container with ID starting with ae3063ad6284e59fa6638acbfc6298251ed134ae577d45c5dcb5afbd9e8dc5bf not found: ID does not exist" Jan 28 17:20:05 crc kubenswrapper[5001]: E0128 17:20:05.661456 5001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="1.6s" Jan 28 17:20:05 crc kubenswrapper[5001]: E0128 17:20:05.972352 5001 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.30:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ef4be7e781f3e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 17:20:02.605948734 +0000 UTC m=+248.773736964,LastTimestamp:2026-01-28 17:20:02.605948734 +0000 UTC m=+248.773736964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 17:20:06 crc kubenswrapper[5001]: I0128 17:20:06.599140 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 28 17:20:07 crc kubenswrapper[5001]: E0128 17:20:07.263249 5001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="3.2s" Jan 28 17:20:10 crc kubenswrapper[5001]: E0128 17:20:10.464463 5001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="6.4s" Jan 28 17:20:14 crc kubenswrapper[5001]: I0128 17:20:14.597354 5001 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:14 crc kubenswrapper[5001]: I0128 17:20:14.598103 5001 status_manager.go:851] "Failed to get status for pod" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" pod="openshift-marketplace/redhat-operators-97z2w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97z2w\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:14 crc kubenswrapper[5001]: I0128 17:20:14.599172 5001 status_manager.go:851] "Failed to get status for pod" podUID="7075bc2e-15dc-4bbc-a5d7-f77c163576fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:15 crc kubenswrapper[5001]: E0128 17:20:15.973682 5001 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.30:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ef4be7e781f3e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 17:20:02.605948734 +0000 UTC m=+248.773736964,LastTimestamp:2026-01-28 17:20:02.605948734 +0000 UTC m=+248.773736964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 17:20:16 crc kubenswrapper[5001]: I0128 17:20:16.593428 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:16 crc kubenswrapper[5001]: I0128 17:20:16.594308 5001 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:16 crc kubenswrapper[5001]: I0128 17:20:16.594647 5001 status_manager.go:851] "Failed to get status for pod" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" pod="openshift-marketplace/redhat-operators-97z2w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97z2w\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:16 crc kubenswrapper[5001]: I0128 17:20:16.595075 5001 status_manager.go:851] "Failed to get status for pod" podUID="7075bc2e-15dc-4bbc-a5d7-f77c163576fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:16 crc kubenswrapper[5001]: I0128 17:20:16.607951 5001 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a62f06ca-6dcd-45eb-89c5-e284699a8ff8" Jan 28 17:20:16 crc kubenswrapper[5001]: I0128 17:20:16.607997 5001 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a62f06ca-6dcd-45eb-89c5-e284699a8ff8" Jan 28 17:20:16 crc kubenswrapper[5001]: E0128 17:20:16.608498 5001 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:16 crc kubenswrapper[5001]: I0128 17:20:16.609152 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:16 crc kubenswrapper[5001]: W0128 17:20:16.640648 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-f53ca7c83aa626af6bd3124d5ce7b5faa3dc9e7d2e95019290ae50909932fad9 WatchSource:0}: Error finding container f53ca7c83aa626af6bd3124d5ce7b5faa3dc9e7d2e95019290ae50909932fad9: Status 404 returned error can't find the container with id f53ca7c83aa626af6bd3124d5ce7b5faa3dc9e7d2e95019290ae50909932fad9 Jan 28 17:20:16 crc kubenswrapper[5001]: E0128 17:20:16.866455 5001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="7s" Jan 28 17:20:17 crc kubenswrapper[5001]: I0128 17:20:17.445003 5001 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8cebc5b2cfd3f2db3388d24e812c9c2afca37e9b0239ff14cd6898cb4e19435c" exitCode=0 Jan 28 17:20:17 crc kubenswrapper[5001]: I0128 17:20:17.445099 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8cebc5b2cfd3f2db3388d24e812c9c2afca37e9b0239ff14cd6898cb4e19435c"} Jan 28 17:20:17 crc kubenswrapper[5001]: I0128 17:20:17.445151 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f53ca7c83aa626af6bd3124d5ce7b5faa3dc9e7d2e95019290ae50909932fad9"} Jan 28 17:20:17 crc kubenswrapper[5001]: I0128 17:20:17.445445 5001 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a62f06ca-6dcd-45eb-89c5-e284699a8ff8" Jan 28 17:20:17 crc kubenswrapper[5001]: I0128 17:20:17.445464 5001 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a62f06ca-6dcd-45eb-89c5-e284699a8ff8" Jan 28 17:20:17 crc kubenswrapper[5001]: I0128 17:20:17.445918 5001 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:17 crc kubenswrapper[5001]: E0128 17:20:17.446128 5001 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:17 crc kubenswrapper[5001]: I0128 17:20:17.446174 5001 status_manager.go:851] "Failed to get status for pod" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" pod="openshift-marketplace/redhat-operators-97z2w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97z2w\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:17 crc kubenswrapper[5001]: I0128 17:20:17.446399 5001 status_manager.go:851] "Failed to get status for pod" podUID="7075bc2e-15dc-4bbc-a5d7-f77c163576fa" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:17 crc kubenswrapper[5001]: I0128 17:20:17.448860 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 17:20:17 crc kubenswrapper[5001]: I0128 17:20:17.448921 5001 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9" exitCode=1 Jan 28 17:20:17 crc kubenswrapper[5001]: I0128 17:20:17.448951 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9"} Jan 28 17:20:17 crc kubenswrapper[5001]: I0128 17:20:17.449475 5001 scope.go:117] "RemoveContainer" containerID="df57f9f1ff44fa9a9a6fd0d0ff83de5a8054dca5141a4c1b6663e7ff04693be9" Jan 28 17:20:17 crc kubenswrapper[5001]: I0128 17:20:17.449896 5001 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:17 crc kubenswrapper[5001]: I0128 17:20:17.450358 5001 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:17 crc kubenswrapper[5001]: I0128 17:20:17.450621 5001 status_manager.go:851] "Failed to get status for pod" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" pod="openshift-marketplace/redhat-operators-97z2w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-97z2w\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:17 crc kubenswrapper[5001]: I0128 17:20:17.450994 5001 status_manager.go:851] "Failed to get status for pod" podUID="7075bc2e-15dc-4bbc-a5d7-f77c163576fa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Jan 28 17:20:17 crc kubenswrapper[5001]: I0128 17:20:17.478748 5001 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:20:18 crc kubenswrapper[5001]: I0128 17:20:18.458848 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7019238e1697ba6893864420979bd0181a86049295ce2a060269da8052c2993d"} Jan 28 17:20:18 crc kubenswrapper[5001]: I0128 17:20:18.459235 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"43ef91a0c34f340735922dfbbe91a5cac4eb5df865c07d98b367431b9fe85fac"} Jan 28 17:20:18 crc kubenswrapper[5001]: I0128 17:20:18.459251 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"546a7ec489682ea4d040c44e409d99047f6c9383b54f34ef1f2fd7b16d3c3d00"} Jan 28 17:20:18 crc kubenswrapper[5001]: I0128 17:20:18.459263 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f376521361aac163f05ed43c3eddbfc4a00b982dcdba9e096367dd752a7f8c87"} Jan 28 17:20:18 crc kubenswrapper[5001]: I0128 17:20:18.464613 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 17:20:18 crc kubenswrapper[5001]: I0128 17:20:18.464675 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f9a9936d18f82ea55801a2dc42fe723ccf8dd4d1b64304f261f7573ff0674eeb"} Jan 28 17:20:18 crc kubenswrapper[5001]: I0128 17:20:18.594870 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:18 crc kubenswrapper[5001]: I0128 17:20:18.595418 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:19 crc kubenswrapper[5001]: I0128 17:20:19.472593 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5631891cf4593490a8cca66b91ef48eee9dd5b68526746b2e4967c93f56eb2cf"} Jan 28 17:20:19 crc kubenswrapper[5001]: I0128 17:20:19.473003 5001 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a62f06ca-6dcd-45eb-89c5-e284699a8ff8" Jan 28 17:20:19 crc kubenswrapper[5001]: I0128 17:20:19.473036 5001 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a62f06ca-6dcd-45eb-89c5-e284699a8ff8" Jan 28 17:20:21 crc kubenswrapper[5001]: I0128 17:20:21.610157 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:21 crc kubenswrapper[5001]: I0128 17:20:21.610468 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:21 crc kubenswrapper[5001]: I0128 17:20:21.617155 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:23 crc kubenswrapper[5001]: I0128 17:20:23.772707 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:20:23 crc kubenswrapper[5001]: I0128 17:20:23.778282 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:20:24 crc kubenswrapper[5001]: I0128 17:20:24.482198 5001 
kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:24 crc kubenswrapper[5001]: I0128 17:20:24.505916 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" event={"ID":"716c725e-e0aa-455a-a6f3-c5d488403f4e","Type":"ContainerStarted","Data":"785cc590cb74f70ee594d05fe57767874136e6497d61b58fd7c9e0b0de2065dd"} Jan 28 17:20:24 crc kubenswrapper[5001]: I0128 17:20:24.506015 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" event={"ID":"716c725e-e0aa-455a-a6f3-c5d488403f4e","Type":"ContainerStarted","Data":"30037728e430be1fe86684072beeffc821795a2bd1c237cb1385d454ada8fa1f"} Jan 28 17:20:24 crc kubenswrapper[5001]: I0128 17:20:24.506360 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:20:24 crc kubenswrapper[5001]: I0128 17:20:24.506414 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:24 crc kubenswrapper[5001]: I0128 17:20:24.616950 5001 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="33657a6f-912f-412f-bbc5-f8d10d7a50e7" Jan 28 17:20:24 crc kubenswrapper[5001]: I0128 17:20:24.696817 5001 patch_prober.go:28] interesting pod/oauth-openshift-5cc5b65bd-7bzt4 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.61:6443/healthz\": read tcp 10.217.0.2:40994->10.217.0.61:6443: read: connection reset by peer" start-of-body= Jan 28 17:20:24 crc kubenswrapper[5001]: I0128 17:20:24.696872 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" podUID="716c725e-e0aa-455a-a6f3-c5d488403f4e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.61:6443/healthz\": read tcp 10.217.0.2:40994->10.217.0.61:6443: read: connection reset by peer" Jan 28 17:20:25 crc kubenswrapper[5001]: I0128 17:20:25.512616 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5cc5b65bd-7bzt4_716c725e-e0aa-455a-a6f3-c5d488403f4e/oauth-openshift/0.log" Jan 28 17:20:25 crc kubenswrapper[5001]: I0128 17:20:25.513707 5001 generic.go:334] "Generic (PLEG): container finished" podID="716c725e-e0aa-455a-a6f3-c5d488403f4e" containerID="785cc590cb74f70ee594d05fe57767874136e6497d61b58fd7c9e0b0de2065dd" exitCode=255 Jan 28 17:20:25 crc kubenswrapper[5001]: I0128 17:20:25.513770 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" event={"ID":"716c725e-e0aa-455a-a6f3-c5d488403f4e","Type":"ContainerDied","Data":"785cc590cb74f70ee594d05fe57767874136e6497d61b58fd7c9e0b0de2065dd"} Jan 28 17:20:25 crc kubenswrapper[5001]: I0128 17:20:25.514172 5001 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a62f06ca-6dcd-45eb-89c5-e284699a8ff8" Jan 28 17:20:25 crc kubenswrapper[5001]: I0128 17:20:25.514199 5001 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a62f06ca-6dcd-45eb-89c5-e284699a8ff8" Jan 28 17:20:25 crc kubenswrapper[5001]: I0128 17:20:25.514328 
5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:25 crc kubenswrapper[5001]: I0128 17:20:25.514367 5001 scope.go:117] "RemoveContainer" containerID="785cc590cb74f70ee594d05fe57767874136e6497d61b58fd7c9e0b0de2065dd" Jan 28 17:20:25 crc kubenswrapper[5001]: I0128 17:20:25.519642 5001 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="33657a6f-912f-412f-bbc5-f8d10d7a50e7" Jan 28 17:20:25 crc kubenswrapper[5001]: I0128 17:20:25.521487 5001 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://f376521361aac163f05ed43c3eddbfc4a00b982dcdba9e096367dd752a7f8c87" Jan 28 17:20:25 crc kubenswrapper[5001]: I0128 17:20:25.521511 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:26 crc kubenswrapper[5001]: I0128 17:20:26.522020 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5cc5b65bd-7bzt4_716c725e-e0aa-455a-a6f3-c5d488403f4e/oauth-openshift/1.log" Jan 28 17:20:26 crc kubenswrapper[5001]: I0128 17:20:26.523259 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5cc5b65bd-7bzt4_716c725e-e0aa-455a-a6f3-c5d488403f4e/oauth-openshift/0.log" Jan 28 17:20:26 crc kubenswrapper[5001]: I0128 17:20:26.523306 5001 generic.go:334] "Generic (PLEG): container finished" podID="716c725e-e0aa-455a-a6f3-c5d488403f4e" containerID="bec05550d1db2b361c47637aa9fb7e5513f89127274a2e2393b2a3e7f2239f76" exitCode=255 Jan 28 17:20:26 crc kubenswrapper[5001]: I0128 17:20:26.523372 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" event={"ID":"716c725e-e0aa-455a-a6f3-c5d488403f4e","Type":"ContainerDied","Data":"bec05550d1db2b361c47637aa9fb7e5513f89127274a2e2393b2a3e7f2239f76"} Jan 28 17:20:26 crc kubenswrapper[5001]: I0128 17:20:26.523424 5001 scope.go:117] "RemoveContainer" containerID="785cc590cb74f70ee594d05fe57767874136e6497d61b58fd7c9e0b0de2065dd" Jan 28 17:20:26 crc kubenswrapper[5001]: I0128 17:20:26.523684 5001 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a62f06ca-6dcd-45eb-89c5-e284699a8ff8" Jan 28 17:20:26 crc kubenswrapper[5001]: I0128 17:20:26.523710 5001 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a62f06ca-6dcd-45eb-89c5-e284699a8ff8" Jan 28 17:20:26 crc kubenswrapper[5001]: I0128 17:20:26.523888 5001 scope.go:117] "RemoveContainer" containerID="bec05550d1db2b361c47637aa9fb7e5513f89127274a2e2393b2a3e7f2239f76" Jan 28 17:20:26 crc kubenswrapper[5001]: E0128 17:20:26.524207 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-5cc5b65bd-7bzt4_openshift-authentication(716c725e-e0aa-455a-a6f3-c5d488403f4e)\"" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" podUID="716c725e-e0aa-455a-a6f3-c5d488403f4e" Jan 28 17:20:26 crc kubenswrapper[5001]: I0128 17:20:26.532730 5001 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="33657a6f-912f-412f-bbc5-f8d10d7a50e7" Jan 28 17:20:27 crc kubenswrapper[5001]: I0128 17:20:27.529468 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5cc5b65bd-7bzt4_716c725e-e0aa-455a-a6f3-c5d488403f4e/oauth-openshift/1.log" Jan 28 17:20:27 crc kubenswrapper[5001]: I0128 17:20:27.530055 5001 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a62f06ca-6dcd-45eb-89c5-e284699a8ff8" Jan 28 17:20:27 crc kubenswrapper[5001]: I0128 17:20:27.530073 5001 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a62f06ca-6dcd-45eb-89c5-e284699a8ff8" Jan 28 17:20:27 crc kubenswrapper[5001]: I0128 17:20:27.530687 5001 scope.go:117] "RemoveContainer" containerID="bec05550d1db2b361c47637aa9fb7e5513f89127274a2e2393b2a3e7f2239f76" Jan 28 17:20:27 crc kubenswrapper[5001]: E0128 17:20:27.531043 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-5cc5b65bd-7bzt4_openshift-authentication(716c725e-e0aa-455a-a6f3-c5d488403f4e)\"" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" podUID="716c725e-e0aa-455a-a6f3-c5d488403f4e" Jan 28 17:20:27 crc kubenswrapper[5001]: I0128 17:20:27.532814 5001 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="33657a6f-912f-412f-bbc5-f8d10d7a50e7" Jan 28 17:20:32 crc kubenswrapper[5001]: I0128 17:20:32.051313 5001 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:32 crc kubenswrapper[5001]: I0128 17:20:32.051678 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:32 crc kubenswrapper[5001]: I0128 17:20:32.052374 5001 scope.go:117] "RemoveContainer" containerID="bec05550d1db2b361c47637aa9fb7e5513f89127274a2e2393b2a3e7f2239f76" Jan 28 17:20:32 crc kubenswrapper[5001]: E0128 17:20:32.052657 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-5cc5b65bd-7bzt4_openshift-authentication(716c725e-e0aa-455a-a6f3-c5d488403f4e)\"" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" podUID="716c725e-e0aa-455a-a6f3-c5d488403f4e" Jan 28 17:20:34 crc kubenswrapper[5001]: I0128 17:20:34.629196 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 17:20:34 crc kubenswrapper[5001]: I0128 17:20:34.645663 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 17:20:34 crc kubenswrapper[5001]: I0128 17:20:34.818146 5001 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 17:20:35 crc kubenswrapper[5001]: I0128 17:20:35.155835 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 17:20:35 crc kubenswrapper[5001]: I0128 17:20:35.222311 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 17:20:35 crc kubenswrapper[5001]: I0128 17:20:35.259355 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 28 17:20:35 crc kubenswrapper[5001]: I0128 17:20:35.266821 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 17:20:35 crc kubenswrapper[5001]: I0128 17:20:35.704856 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 17:20:35 crc kubenswrapper[5001]: I0128 17:20:35.824099 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 28 17:20:35 crc kubenswrapper[5001]: I0128 17:20:35.868811 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 17:20:35 crc kubenswrapper[5001]: I0128 17:20:35.957435 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 17:20:36 crc kubenswrapper[5001]: I0128 17:20:36.001858 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 17:20:36 crc kubenswrapper[5001]: I0128 17:20:36.086995 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 28 17:20:36 crc kubenswrapper[5001]: I0128 17:20:36.178151 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 17:20:36 crc kubenswrapper[5001]: I0128 17:20:36.449681 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 17:20:36 crc kubenswrapper[5001]: I0128 17:20:36.698498 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 17:20:36 crc kubenswrapper[5001]: I0128 17:20:36.914390 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 17:20:36 crc kubenswrapper[5001]: I0128 17:20:36.950624 5001 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 17:20:36 crc kubenswrapper[5001]: I0128 17:20:36.952270 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=34.952252964 podStartE2EDuration="34.952252964s" podCreationTimestamp="2026-01-28 17:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:20:24.128700586 +0000 UTC m=+270.296488816" watchObservedRunningTime="2026-01-28 17:20:36.952252964 +0000 UTC m=+283.120041194" Jan 28 17:20:36 crc kubenswrapper[5001]: I0128 17:20:36.954932 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 17:20:36 crc kubenswrapper[5001]: I0128 17:20:36.954981 5001 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 17:20:36 crc kubenswrapper[5001]: I0128 17:20:36.955014 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4"] Jan 28 17:20:36 crc kubenswrapper[5001]: I0128 17:20:36.955579 5001 scope.go:117] "RemoveContainer" containerID="bec05550d1db2b361c47637aa9fb7e5513f89127274a2e2393b2a3e7f2239f76" Jan 28 17:20:36 crc kubenswrapper[5001]: I0128 17:20:36.959183 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 17:20:37 crc kubenswrapper[5001]: I0128 17:20:37.000613 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.000593378 podStartE2EDuration="13.000593378s" podCreationTimestamp="2026-01-28 17:20:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:20:36.975132384 +0000 UTC m=+283.142920614" watchObservedRunningTime="2026-01-28 17:20:37.000593378 +0000 UTC m=+283.168381618" Jan 28 17:20:37 crc kubenswrapper[5001]: I0128 17:20:37.081317 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 17:20:37 crc kubenswrapper[5001]: I0128 17:20:37.121688 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 17:20:37 crc kubenswrapper[5001]: I0128 17:20:37.234297 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 28 17:20:37 crc kubenswrapper[5001]: I0128 17:20:37.303766 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 17:20:37 crc kubenswrapper[5001]: I0128 17:20:37.587080 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5cc5b65bd-7bzt4_716c725e-e0aa-455a-a6f3-c5d488403f4e/oauth-openshift/2.log" Jan 28 17:20:37 crc kubenswrapper[5001]: I0128 17:20:37.587523 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5cc5b65bd-7bzt4_716c725e-e0aa-455a-a6f3-c5d488403f4e/oauth-openshift/1.log" Jan 28 17:20:37 crc kubenswrapper[5001]: I0128 17:20:37.587569 5001 generic.go:334] "Generic (PLEG): container finished" podID="716c725e-e0aa-455a-a6f3-c5d488403f4e" containerID="2a851e1d96e1a5e91bca0535f5a2f0a9739778fe966e40cb4d3c5e00947df55f" exitCode=255 Jan 28 17:20:37 crc kubenswrapper[5001]: I0128 17:20:37.587667 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" event={"ID":"716c725e-e0aa-455a-a6f3-c5d488403f4e","Type":"ContainerDied","Data":"2a851e1d96e1a5e91bca0535f5a2f0a9739778fe966e40cb4d3c5e00947df55f"} Jan 28 17:20:37 crc kubenswrapper[5001]: I0128 17:20:37.587771 5001 scope.go:117] "RemoveContainer" containerID="bec05550d1db2b361c47637aa9fb7e5513f89127274a2e2393b2a3e7f2239f76" Jan 28 17:20:37 crc kubenswrapper[5001]: I0128 17:20:37.588202 5001 scope.go:117] "RemoveContainer" containerID="2a851e1d96e1a5e91bca0535f5a2f0a9739778fe966e40cb4d3c5e00947df55f" Jan 28 17:20:37 crc kubenswrapper[5001]: E0128 17:20:37.589538 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-5cc5b65bd-7bzt4_openshift-authentication(716c725e-e0aa-455a-a6f3-c5d488403f4e)\"" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" podUID="716c725e-e0aa-455a-a6f3-c5d488403f4e" Jan 28 17:20:37 crc kubenswrapper[5001]: I0128 17:20:37.767612 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 17:20:37 crc kubenswrapper[5001]: I0128 17:20:37.859907 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 17:20:37 crc kubenswrapper[5001]: I0128 17:20:37.865625 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 17:20:37 crc kubenswrapper[5001]: I0128 17:20:37.899146 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 17:20:37 crc kubenswrapper[5001]: I0128 17:20:37.981436 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 17:20:38 crc kubenswrapper[5001]: I0128 17:20:38.030574 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 17:20:38 crc kubenswrapper[5001]: I0128 17:20:38.032827 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 17:20:38 crc kubenswrapper[5001]: I0128 17:20:38.091181 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 28 17:20:38 crc kubenswrapper[5001]: I0128 17:20:38.151584 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 28 17:20:38 crc kubenswrapper[5001]: I0128 17:20:38.191456 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 28 17:20:38 crc kubenswrapper[5001]: I0128 17:20:38.224418 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 28 17:20:38 crc kubenswrapper[5001]: I0128 17:20:38.272055 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 17:20:38 crc kubenswrapper[5001]: I0128 17:20:38.432260 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 17:20:38 crc kubenswrapper[5001]: I0128 17:20:38.458280 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 17:20:38 crc kubenswrapper[5001]: I0128 17:20:38.594199 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 17:20:38 crc kubenswrapper[5001]: I0128 17:20:38.595930 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5cc5b65bd-7bzt4_716c725e-e0aa-455a-a6f3-c5d488403f4e/oauth-openshift/2.log" Jan 28 17:20:38 crc kubenswrapper[5001]: I0128 17:20:38.605189 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 17:20:38 crc 
kubenswrapper[5001]: I0128 17:20:38.609506 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 17:20:38 crc kubenswrapper[5001]: I0128 17:20:38.727786 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 17:20:38 crc kubenswrapper[5001]: I0128 17:20:38.750876 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 28 17:20:38 crc kubenswrapper[5001]: I0128 17:20:38.761199 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 17:20:38 crc kubenswrapper[5001]: I0128 17:20:38.925286 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 28 17:20:38 crc kubenswrapper[5001]: I0128 17:20:38.992156 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 17:20:39 crc kubenswrapper[5001]: I0128 17:20:39.145755 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 17:20:39 crc kubenswrapper[5001]: I0128 17:20:39.197692 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 17:20:39 crc kubenswrapper[5001]: I0128 17:20:39.327470 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 28 17:20:39 crc kubenswrapper[5001]: I0128 17:20:39.410805 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 17:20:39 crc kubenswrapper[5001]: I0128 17:20:39.420034 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 28 17:20:39 crc kubenswrapper[5001]: I0128 17:20:39.611824 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 17:20:39 crc kubenswrapper[5001]: I0128 17:20:39.616084 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 17:20:39 crc kubenswrapper[5001]: I0128 17:20:39.629100 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 17:20:39 crc kubenswrapper[5001]: I0128 17:20:39.694222 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 17:20:39 crc kubenswrapper[5001]: I0128 17:20:39.703609 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 17:20:39 crc kubenswrapper[5001]: I0128 17:20:39.721108 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 17:20:39 crc kubenswrapper[5001]: I0128 17:20:39.724284 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 28 17:20:39 crc kubenswrapper[5001]: I0128 17:20:39.738168 5001 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 17:20:39 crc kubenswrapper[5001]: I0128 17:20:39.749138 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 17:20:39 crc kubenswrapper[5001]: I0128 17:20:39.842025 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 17:20:39 crc kubenswrapper[5001]: I0128 17:20:39.909480 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 28 17:20:39 crc kubenswrapper[5001]: I0128 17:20:39.912049 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 17:20:39 crc kubenswrapper[5001]: I0128 17:20:39.916025 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 17:20:40 crc kubenswrapper[5001]: I0128 17:20:40.047048 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 17:20:40 crc kubenswrapper[5001]: I0128 17:20:40.047200 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 28 17:20:40 crc kubenswrapper[5001]: I0128 17:20:40.091711 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 17:20:40 crc kubenswrapper[5001]: I0128 17:20:40.103572 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 17:20:40 crc kubenswrapper[5001]: I0128 17:20:40.304572 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 17:20:40 crc kubenswrapper[5001]: I0128 17:20:40.403124 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 17:20:40 crc kubenswrapper[5001]: I0128 17:20:40.403645 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 17:20:40 crc kubenswrapper[5001]: I0128 17:20:40.407296 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 28 17:20:40 crc kubenswrapper[5001]: I0128 17:20:40.436729 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 17:20:40 crc kubenswrapper[5001]: I0128 17:20:40.532753 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 28 17:20:40 crc kubenswrapper[5001]: I0128 17:20:40.553587 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 17:20:40 crc kubenswrapper[5001]: I0128 17:20:40.630272 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 17:20:40 crc kubenswrapper[5001]: I0128 17:20:40.763029 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 17:20:40 crc kubenswrapper[5001]: I0128 17:20:40.822621 5001 reflector.go:368] Caches populated for 
*v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 17:20:40 crc kubenswrapper[5001]: I0128 17:20:40.856578 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 17:20:40 crc kubenswrapper[5001]: I0128 17:20:40.904605 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 17:20:40 crc kubenswrapper[5001]: I0128 17:20:40.914230 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 28 17:20:40 crc kubenswrapper[5001]: I0128 17:20:40.941395 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.010389 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.032326 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.078256 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.296291 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.296654 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.296760 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.345968 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.435643 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.475462 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.521257 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.522095 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.555139 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.578443 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.579730 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.626664 5001 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.694485 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.707833 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.753475 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.758592 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.768827 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.821512 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.866398 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.940193 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 17:20:41 crc kubenswrapper[5001]: I0128 17:20:41.988633 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.024358 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.027350 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.051596 5001 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.051653 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.052143 5001 scope.go:117] "RemoveContainer" containerID="2a851e1d96e1a5e91bca0535f5a2f0a9739778fe966e40cb4d3c5e00947df55f" Jan 28 17:20:42 crc kubenswrapper[5001]: E0128 17:20:42.052431 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-5cc5b65bd-7bzt4_openshift-authentication(716c725e-e0aa-455a-a6f3-c5d488403f4e)\"" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" podUID="716c725e-e0aa-455a-a6f3-c5d488403f4e" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.152551 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.157916 5001 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.157920 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.158157 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.247339 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.269325 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.424764 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.457549 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.490353 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.545603 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.733759 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.741647 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.817846 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.818910 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.859066 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.898298 5001 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 17:20:42 crc kubenswrapper[5001]: I0128 17:20:42.994966 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.013587 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.048407 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.094587 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.115200 5001 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.116239 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.148389 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.190939 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.195094 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.347202 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.394696 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.409614 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.414575 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.492406 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.599375 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.658571 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.695758 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.719627 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.735477 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.763145 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.845506 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 17:20:43 crc kubenswrapper[5001]: I0128 17:20:43.930100 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.061019 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.078925 5001 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"env-overrides" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.094454 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.106181 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.123412 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.124643 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.235698 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.312313 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.319016 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.330826 5001 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.470118 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.587129 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.646072 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.655681 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.707204 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.724808 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.761616 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.776292 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.802003 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.810304 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.893537 5001 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"image-import-ca" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.917676 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 17:20:44 crc kubenswrapper[5001]: I0128 17:20:44.998006 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.001051 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.048441 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.064348 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.173780 5001 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.232244 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.314446 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.341356 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.395729 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.511092 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.524253 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.609770 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.628193 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.633371 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.641192 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.645341 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.700857 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.720875 5001 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.817757 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.884659 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.913730 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.929688 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 17:20:45 crc kubenswrapper[5001]: I0128 17:20:45.945896 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 17:20:46 crc kubenswrapper[5001]: I0128 17:20:46.060090 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 28 17:20:46 crc kubenswrapper[5001]: I0128 17:20:46.114073 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 17:20:46 crc kubenswrapper[5001]: I0128 17:20:46.198355 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 17:20:46 crc kubenswrapper[5001]: I0128 17:20:46.243968 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 17:20:46 crc kubenswrapper[5001]: I0128 17:20:46.310956 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 17:20:46 crc kubenswrapper[5001]: I0128 17:20:46.322590 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 17:20:46 crc kubenswrapper[5001]: I0128 17:20:46.373350 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 17:20:46 crc kubenswrapper[5001]: I0128 17:20:46.488997 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 17:20:46 crc kubenswrapper[5001]: I0128 17:20:46.608104 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 17:20:46 crc kubenswrapper[5001]: I0128 17:20:46.648191 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 17:20:46 crc kubenswrapper[5001]: I0128 17:20:46.813772 5001 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 17:20:46 crc kubenswrapper[5001]: I0128 17:20:46.814120 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://1421d899df9838e9e1bc2ad4534f06bcde81fb151e8d527a9c77c8ab40df962d" gracePeriod=5 Jan 28 17:20:46 crc kubenswrapper[5001]: I0128 17:20:46.843271 5001 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 28 17:20:46 crc kubenswrapper[5001]: I0128 17:20:46.902255 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 17:20:46 crc kubenswrapper[5001]: I0128 17:20:46.939697 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.019318 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.036283 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.042203 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.088060 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.103276 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.114927 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.130816 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.167148 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.227410 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.373474 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.458717 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.478573 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.509528 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.621499 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.628262 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.632133 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.670783 5001 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.685325 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.695433 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.731727 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.741494 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.895330 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 17:20:47 crc kubenswrapper[5001]: I0128 17:20:47.957310 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 17:20:48 crc kubenswrapper[5001]: I0128 17:20:48.001744 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 17:20:48 crc kubenswrapper[5001]: I0128 17:20:48.088884 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 17:20:48 crc kubenswrapper[5001]: I0128 17:20:48.103993 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 17:20:48 crc kubenswrapper[5001]: I0128 17:20:48.301118 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 28 17:20:48 crc kubenswrapper[5001]: I0128 17:20:48.361379 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 17:20:48 crc kubenswrapper[5001]: I0128 17:20:48.377279 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 17:20:48 crc kubenswrapper[5001]: I0128 17:20:48.621452 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 17:20:48 crc kubenswrapper[5001]: I0128 17:20:48.673397 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 17:20:48 crc kubenswrapper[5001]: I0128 17:20:48.682326 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 17:20:48 crc kubenswrapper[5001]: I0128 17:20:48.716365 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 28 17:20:48 crc kubenswrapper[5001]: I0128 17:20:48.785066 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 17:20:48 crc kubenswrapper[5001]: I0128 17:20:48.891543 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 17:20:48 crc kubenswrapper[5001]: I0128 17:20:48.897492 5001 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 28 17:20:48 crc kubenswrapper[5001]: I0128 17:20:48.936731 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 17:20:49 crc kubenswrapper[5001]: I0128 17:20:49.022892 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 17:20:49 crc kubenswrapper[5001]: I0128 17:20:49.042792 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 17:20:49 crc kubenswrapper[5001]: I0128 17:20:49.065543 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 17:20:49 crc kubenswrapper[5001]: I0128 17:20:49.235727 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 28 17:20:49 crc kubenswrapper[5001]: I0128 17:20:49.363045 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 28 17:20:49 crc kubenswrapper[5001]: I0128 17:20:49.468549 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 17:20:49 crc kubenswrapper[5001]: I0128 17:20:49.632413 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 17:20:49 crc kubenswrapper[5001]: I0128 17:20:49.644997 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 17:20:49 crc kubenswrapper[5001]: I0128 17:20:49.672965 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 17:20:49 crc kubenswrapper[5001]: I0128 17:20:49.973250 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 17:20:50 crc kubenswrapper[5001]: I0128 17:20:50.104647 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.377733 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.378041 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.409581 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.409667 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.409691 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.409769 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.409793 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.409813 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.409838 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.410053 5001 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.410064 5001 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.410120 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.410151 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.419198 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.511615 5001 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.511658 5001 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.511675 5001 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.601865 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.602166 5001 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.610379 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.610425 5001 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="bb089d93-2d8a-4a49-aa76-7e9ec4a336dc" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.614211 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.614240 5001 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="bb089d93-2d8a-4a49-aa76-7e9ec4a336dc" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.663776 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.663828 5001 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="1421d899df9838e9e1bc2ad4534f06bcde81fb151e8d527a9c77c8ab40df962d" exitCode=137 Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.663874 5001 
scope.go:117] "RemoveContainer" containerID="1421d899df9838e9e1bc2ad4534f06bcde81fb151e8d527a9c77c8ab40df962d" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.663930 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.680182 5001 scope.go:117] "RemoveContainer" containerID="1421d899df9838e9e1bc2ad4534f06bcde81fb151e8d527a9c77c8ab40df962d" Jan 28 17:20:52 crc kubenswrapper[5001]: E0128 17:20:52.680673 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1421d899df9838e9e1bc2ad4534f06bcde81fb151e8d527a9c77c8ab40df962d\": container with ID starting with 1421d899df9838e9e1bc2ad4534f06bcde81fb151e8d527a9c77c8ab40df962d not found: ID does not exist" containerID="1421d899df9838e9e1bc2ad4534f06bcde81fb151e8d527a9c77c8ab40df962d" Jan 28 17:20:52 crc kubenswrapper[5001]: I0128 17:20:52.680724 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1421d899df9838e9e1bc2ad4534f06bcde81fb151e8d527a9c77c8ab40df962d"} err="failed to get container status \"1421d899df9838e9e1bc2ad4534f06bcde81fb151e8d527a9c77c8ab40df962d\": rpc error: code = NotFound desc = could not find container \"1421d899df9838e9e1bc2ad4534f06bcde81fb151e8d527a9c77c8ab40df962d\": container with ID starting with 1421d899df9838e9e1bc2ad4534f06bcde81fb151e8d527a9c77c8ab40df962d not found: ID does not exist" Jan 28 17:20:54 crc kubenswrapper[5001]: I0128 17:20:54.379263 5001 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 28 17:20:55 crc kubenswrapper[5001]: I0128 17:20:55.594539 5001 scope.go:117] "RemoveContainer" containerID="2a851e1d96e1a5e91bca0535f5a2f0a9739778fe966e40cb4d3c5e00947df55f" Jan 28 17:20:55 crc kubenswrapper[5001]: E0128 17:20:55.595033 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-5cc5b65bd-7bzt4_openshift-authentication(716c725e-e0aa-455a-a6f3-c5d488403f4e)\"" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" podUID="716c725e-e0aa-455a-a6f3-c5d488403f4e" Jan 28 17:21:02 crc kubenswrapper[5001]: I0128 17:21:02.720571 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq"] Jan 28 17:21:02 crc kubenswrapper[5001]: I0128 17:21:02.721201 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" podUID="e11d73c8-86be-4813-b947-2da20c575510" containerName="controller-manager" containerID="cri-o://e8c76dcf210ff1e1a448d1119400254f69111a5b574f6e932caefa5a4c495e9e" gracePeriod=30 Jan 28 17:21:02 crc kubenswrapper[5001]: I0128 17:21:02.727131 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn"] Jan 28 17:21:02 crc kubenswrapper[5001]: I0128 17:21:02.727657 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" podUID="c70f3e59-ff36-4184-b115-f429c9574f51" containerName="route-controller-manager" 
containerID="cri-o://6515d3eb32e8f83ee0f02309373e1d8cb64cf13c803dbfefda2f34dd34da481f" gracePeriod=30 Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.096385 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.158692 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8z6w\" (UniqueName: \"kubernetes.io/projected/c70f3e59-ff36-4184-b115-f429c9574f51-kube-api-access-d8z6w\") pod \"c70f3e59-ff36-4184-b115-f429c9574f51\" (UID: \"c70f3e59-ff36-4184-b115-f429c9574f51\") " Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.158841 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c70f3e59-ff36-4184-b115-f429c9574f51-client-ca\") pod \"c70f3e59-ff36-4184-b115-f429c9574f51\" (UID: \"c70f3e59-ff36-4184-b115-f429c9574f51\") " Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.158920 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c70f3e59-ff36-4184-b115-f429c9574f51-config\") pod \"c70f3e59-ff36-4184-b115-f429c9574f51\" (UID: \"c70f3e59-ff36-4184-b115-f429c9574f51\") " Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.158960 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c70f3e59-ff36-4184-b115-f429c9574f51-serving-cert\") pod \"c70f3e59-ff36-4184-b115-f429c9574f51\" (UID: \"c70f3e59-ff36-4184-b115-f429c9574f51\") " Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.159631 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c70f3e59-ff36-4184-b115-f429c9574f51-config" (OuterVolumeSpecName: "config") pod "c70f3e59-ff36-4184-b115-f429c9574f51" (UID: "c70f3e59-ff36-4184-b115-f429c9574f51"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.159734 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c70f3e59-ff36-4184-b115-f429c9574f51-client-ca" (OuterVolumeSpecName: "client-ca") pod "c70f3e59-ff36-4184-b115-f429c9574f51" (UID: "c70f3e59-ff36-4184-b115-f429c9574f51"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.160157 5001 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c70f3e59-ff36-4184-b115-f429c9574f51-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.160204 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c70f3e59-ff36-4184-b115-f429c9574f51-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.164015 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c70f3e59-ff36-4184-b115-f429c9574f51-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c70f3e59-ff36-4184-b115-f429c9574f51" (UID: "c70f3e59-ff36-4184-b115-f429c9574f51"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.164142 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c70f3e59-ff36-4184-b115-f429c9574f51-kube-api-access-d8z6w" (OuterVolumeSpecName: "kube-api-access-d8z6w") pod "c70f3e59-ff36-4184-b115-f429c9574f51" (UID: "c70f3e59-ff36-4184-b115-f429c9574f51"). InnerVolumeSpecName "kube-api-access-d8z6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.172370 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.260789 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e11d73c8-86be-4813-b947-2da20c575510-config\") pod \"e11d73c8-86be-4813-b947-2da20c575510\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.260885 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e11d73c8-86be-4813-b947-2da20c575510-client-ca\") pod \"e11d73c8-86be-4813-b947-2da20c575510\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.260935 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e11d73c8-86be-4813-b947-2da20c575510-proxy-ca-bundles\") pod \"e11d73c8-86be-4813-b947-2da20c575510\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.260956 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e11d73c8-86be-4813-b947-2da20c575510-serving-cert\") pod \"e11d73c8-86be-4813-b947-2da20c575510\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.261122 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f8hv\" (UniqueName: \"kubernetes.io/projected/e11d73c8-86be-4813-b947-2da20c575510-kube-api-access-9f8hv\") pod \"e11d73c8-86be-4813-b947-2da20c575510\" (UID: \"e11d73c8-86be-4813-b947-2da20c575510\") " Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.261424 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8z6w\" (UniqueName: \"kubernetes.io/projected/c70f3e59-ff36-4184-b115-f429c9574f51-kube-api-access-d8z6w\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.261453 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c70f3e59-ff36-4184-b115-f429c9574f51-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.261836 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e11d73c8-86be-4813-b947-2da20c575510-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e11d73c8-86be-4813-b947-2da20c575510" (UID: "e11d73c8-86be-4813-b947-2da20c575510"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.261932 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e11d73c8-86be-4813-b947-2da20c575510-client-ca" (OuterVolumeSpecName: "client-ca") pod "e11d73c8-86be-4813-b947-2da20c575510" (UID: "e11d73c8-86be-4813-b947-2da20c575510"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.262105 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e11d73c8-86be-4813-b947-2da20c575510-config" (OuterVolumeSpecName: "config") pod "e11d73c8-86be-4813-b947-2da20c575510" (UID: "e11d73c8-86be-4813-b947-2da20c575510"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.264169 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e11d73c8-86be-4813-b947-2da20c575510-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e11d73c8-86be-4813-b947-2da20c575510" (UID: "e11d73c8-86be-4813-b947-2da20c575510"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.264736 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e11d73c8-86be-4813-b947-2da20c575510-kube-api-access-9f8hv" (OuterVolumeSpecName: "kube-api-access-9f8hv") pod "e11d73c8-86be-4813-b947-2da20c575510" (UID: "e11d73c8-86be-4813-b947-2da20c575510"). InnerVolumeSpecName "kube-api-access-9f8hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.362475 5001 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e11d73c8-86be-4813-b947-2da20c575510-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.362520 5001 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e11d73c8-86be-4813-b947-2da20c575510-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.362534 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e11d73c8-86be-4813-b947-2da20c575510-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.362548 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f8hv\" (UniqueName: \"kubernetes.io/projected/e11d73c8-86be-4813-b947-2da20c575510-kube-api-access-9f8hv\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.362560 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e11d73c8-86be-4813-b947-2da20c575510-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.737034 5001 generic.go:334] "Generic (PLEG): container finished" podID="e11d73c8-86be-4813-b947-2da20c575510" containerID="e8c76dcf210ff1e1a448d1119400254f69111a5b574f6e932caefa5a4c495e9e" exitCode=0 Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.737144 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" event={"ID":"e11d73c8-86be-4813-b947-2da20c575510","Type":"ContainerDied","Data":"e8c76dcf210ff1e1a448d1119400254f69111a5b574f6e932caefa5a4c495e9e"} Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.737242 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" event={"ID":"e11d73c8-86be-4813-b947-2da20c575510","Type":"ContainerDied","Data":"aed43f2b0b3c64a10d1d75eae36b0999604e86ad0ca10852891e3c8e210cb96f"} Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.737266 5001 scope.go:117] "RemoveContainer" containerID="e8c76dcf210ff1e1a448d1119400254f69111a5b574f6e932caefa5a4c495e9e" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.737529 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.739173 5001 generic.go:334] "Generic (PLEG): container finished" podID="c70f3e59-ff36-4184-b115-f429c9574f51" containerID="6515d3eb32e8f83ee0f02309373e1d8cb64cf13c803dbfefda2f34dd34da481f" exitCode=0 Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.739224 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" event={"ID":"c70f3e59-ff36-4184-b115-f429c9574f51","Type":"ContainerDied","Data":"6515d3eb32e8f83ee0f02309373e1d8cb64cf13c803dbfefda2f34dd34da481f"} Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.739271 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" event={"ID":"c70f3e59-ff36-4184-b115-f429c9574f51","Type":"ContainerDied","Data":"c7be28de708ffd0781b51ba60cde311a0087019aa6a413f3a0d2af3b5989557c"} Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.739336 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.753154 5001 scope.go:117] "RemoveContainer" containerID="e8c76dcf210ff1e1a448d1119400254f69111a5b574f6e932caefa5a4c495e9e" Jan 28 17:21:03 crc kubenswrapper[5001]: E0128 17:21:03.753893 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8c76dcf210ff1e1a448d1119400254f69111a5b574f6e932caefa5a4c495e9e\": container with ID starting with e8c76dcf210ff1e1a448d1119400254f69111a5b574f6e932caefa5a4c495e9e not found: ID does not exist" containerID="e8c76dcf210ff1e1a448d1119400254f69111a5b574f6e932caefa5a4c495e9e" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.753942 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8c76dcf210ff1e1a448d1119400254f69111a5b574f6e932caefa5a4c495e9e"} err="failed to get container status \"e8c76dcf210ff1e1a448d1119400254f69111a5b574f6e932caefa5a4c495e9e\": rpc error: code = NotFound desc = could not find container \"e8c76dcf210ff1e1a448d1119400254f69111a5b574f6e932caefa5a4c495e9e\": container with ID starting with e8c76dcf210ff1e1a448d1119400254f69111a5b574f6e932caefa5a4c495e9e not found: ID does not exist" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.753968 5001 scope.go:117] "RemoveContainer" containerID="6515d3eb32e8f83ee0f02309373e1d8cb64cf13c803dbfefda2f34dd34da481f" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.768256 5001 scope.go:117] "RemoveContainer" containerID="6515d3eb32e8f83ee0f02309373e1d8cb64cf13c803dbfefda2f34dd34da481f" Jan 28 17:21:03 crc kubenswrapper[5001]: E0128 17:21:03.768737 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6515d3eb32e8f83ee0f02309373e1d8cb64cf13c803dbfefda2f34dd34da481f\": container with ID starting with 6515d3eb32e8f83ee0f02309373e1d8cb64cf13c803dbfefda2f34dd34da481f not found: ID does not exist" containerID="6515d3eb32e8f83ee0f02309373e1d8cb64cf13c803dbfefda2f34dd34da481f" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.768797 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6515d3eb32e8f83ee0f02309373e1d8cb64cf13c803dbfefda2f34dd34da481f"} err="failed to get container status \"6515d3eb32e8f83ee0f02309373e1d8cb64cf13c803dbfefda2f34dd34da481f\": rpc error: code = NotFound desc = could not find container \"6515d3eb32e8f83ee0f02309373e1d8cb64cf13c803dbfefda2f34dd34da481f\": container with ID starting with 6515d3eb32e8f83ee0f02309373e1d8cb64cf13c803dbfefda2f34dd34da481f not found: ID does not exist" Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.773312 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq"] Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.779440 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-76dbcd8bd5-n57xq"] Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.784386 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn"] Jan 28 17:21:03 crc kubenswrapper[5001]: I0128 17:21:03.788769 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-899449c8d-pc6nn"] Jan 28 
17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.601998 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c70f3e59-ff36-4184-b115-f429c9574f51" path="/var/lib/kubelet/pods/c70f3e59-ff36-4184-b115-f429c9574f51/volumes" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.602614 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e11d73c8-86be-4813-b947-2da20c575510" path="/var/lib/kubelet/pods/e11d73c8-86be-4813-b947-2da20c575510/volumes" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.720935 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55d4c5b566-gthvk"] Jan 28 17:21:04 crc kubenswrapper[5001]: E0128 17:21:04.721595 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.721650 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 17:21:04 crc kubenswrapper[5001]: E0128 17:21:04.721696 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c70f3e59-ff36-4184-b115-f429c9574f51" containerName="route-controller-manager" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.721709 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="c70f3e59-ff36-4184-b115-f429c9574f51" containerName="route-controller-manager" Jan 28 17:21:04 crc kubenswrapper[5001]: E0128 17:21:04.721750 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7075bc2e-15dc-4bbc-a5d7-f77c163576fa" containerName="installer" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.721760 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="7075bc2e-15dc-4bbc-a5d7-f77c163576fa" containerName="installer" Jan 28 17:21:04 crc kubenswrapper[5001]: E0128 17:21:04.721776 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e11d73c8-86be-4813-b947-2da20c575510" containerName="controller-manager" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.721789 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="e11d73c8-86be-4813-b947-2da20c575510" containerName="controller-manager" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.721907 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="c70f3e59-ff36-4184-b115-f429c9574f51" containerName="route-controller-manager" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.721927 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="7075bc2e-15dc-4bbc-a5d7-f77c163576fa" containerName="installer" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.721938 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.721952 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="e11d73c8-86be-4813-b947-2da20c575510" containerName="controller-manager" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.722449 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.724522 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r"] Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.725245 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.725474 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.730039 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.730120 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.730203 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.730242 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.730287 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.730346 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.730797 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.731477 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.742845 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.743593 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.744375 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55d4c5b566-gthvk"] Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.745020 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.756192 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.765500 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r"] Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.781006 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-config\") pod \"route-controller-manager-6bc9b688db-zqm6r\" (UID: \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.781063 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c06c774e-b853-4524-a29c-96bfefb8dd4f-serving-cert\") pod \"controller-manager-55d4c5b566-gthvk\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.781092 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c06c774e-b853-4524-a29c-96bfefb8dd4f-proxy-ca-bundles\") pod \"controller-manager-55d4c5b566-gthvk\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.781118 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-serving-cert\") pod \"route-controller-manager-6bc9b688db-zqm6r\" (UID: \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.781172 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c06c774e-b853-4524-a29c-96bfefb8dd4f-client-ca\") pod \"controller-manager-55d4c5b566-gthvk\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.781207 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87qb4\" (UniqueName: \"kubernetes.io/projected/c06c774e-b853-4524-a29c-96bfefb8dd4f-kube-api-access-87qb4\") pod \"controller-manager-55d4c5b566-gthvk\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.781261 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-client-ca\") pod \"route-controller-manager-6bc9b688db-zqm6r\" (UID: \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.781282 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97x4x\" (UniqueName: \"kubernetes.io/projected/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-kube-api-access-97x4x\") pod \"route-controller-manager-6bc9b688db-zqm6r\" (UID: \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.781408 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/c06c774e-b853-4524-a29c-96bfefb8dd4f-config\") pod \"controller-manager-55d4c5b566-gthvk\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.882732 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c06c774e-b853-4524-a29c-96bfefb8dd4f-proxy-ca-bundles\") pod \"controller-manager-55d4c5b566-gthvk\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.882805 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-serving-cert\") pod \"route-controller-manager-6bc9b688db-zqm6r\" (UID: \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.882841 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c06c774e-b853-4524-a29c-96bfefb8dd4f-client-ca\") pod \"controller-manager-55d4c5b566-gthvk\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.882892 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87qb4\" (UniqueName: \"kubernetes.io/projected/c06c774e-b853-4524-a29c-96bfefb8dd4f-kube-api-access-87qb4\") pod \"controller-manager-55d4c5b566-gthvk\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.882923 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97x4x\" (UniqueName: \"kubernetes.io/projected/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-kube-api-access-97x4x\") pod \"route-controller-manager-6bc9b688db-zqm6r\" (UID: \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.883296 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-client-ca\") pod \"route-controller-manager-6bc9b688db-zqm6r\" (UID: \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.884056 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-client-ca\") pod \"route-controller-manager-6bc9b688db-zqm6r\" (UID: \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.884056 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c06c774e-b853-4524-a29c-96bfefb8dd4f-proxy-ca-bundles\") pod 
\"controller-manager-55d4c5b566-gthvk\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.884109 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c06c774e-b853-4524-a29c-96bfefb8dd4f-config\") pod \"controller-manager-55d4c5b566-gthvk\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.884165 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-config\") pod \"route-controller-manager-6bc9b688db-zqm6r\" (UID: \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.884189 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c06c774e-b853-4524-a29c-96bfefb8dd4f-serving-cert\") pod \"controller-manager-55d4c5b566-gthvk\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.884521 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c06c774e-b853-4524-a29c-96bfefb8dd4f-config\") pod \"controller-manager-55d4c5b566-gthvk\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.884910 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c06c774e-b853-4524-a29c-96bfefb8dd4f-client-ca\") pod \"controller-manager-55d4c5b566-gthvk\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.884955 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-config\") pod \"route-controller-manager-6bc9b688db-zqm6r\" (UID: \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.889872 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-serving-cert\") pod \"route-controller-manager-6bc9b688db-zqm6r\" (UID: \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.890036 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c06c774e-b853-4524-a29c-96bfefb8dd4f-serving-cert\") pod \"controller-manager-55d4c5b566-gthvk\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.899196 5001 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-97x4x\" (UniqueName: \"kubernetes.io/projected/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-kube-api-access-97x4x\") pod \"route-controller-manager-6bc9b688db-zqm6r\" (UID: \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" Jan 28 17:21:04 crc kubenswrapper[5001]: I0128 17:21:04.906388 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87qb4\" (UniqueName: \"kubernetes.io/projected/c06c774e-b853-4524-a29c-96bfefb8dd4f-kube-api-access-87qb4\") pod \"controller-manager-55d4c5b566-gthvk\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:05 crc kubenswrapper[5001]: I0128 17:21:05.057999 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:05 crc kubenswrapper[5001]: I0128 17:21:05.070211 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" Jan 28 17:21:05 crc kubenswrapper[5001]: I0128 17:21:05.452794 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55d4c5b566-gthvk"] Jan 28 17:21:05 crc kubenswrapper[5001]: I0128 17:21:05.492926 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r"] Jan 28 17:21:05 crc kubenswrapper[5001]: W0128 17:21:05.498256 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda00ec5cf_b1e4_45a1_b2c9_699419357b5a.slice/crio-3a782aace1e58ed45447db5bab5103ef823d068922845ee7f9ae938723bfdc27 WatchSource:0}: Error finding container 3a782aace1e58ed45447db5bab5103ef823d068922845ee7f9ae938723bfdc27: Status 404 returned error can't find the container with id 3a782aace1e58ed45447db5bab5103ef823d068922845ee7f9ae938723bfdc27 Jan 28 17:21:05 crc kubenswrapper[5001]: I0128 17:21:05.772835 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" event={"ID":"c06c774e-b853-4524-a29c-96bfefb8dd4f","Type":"ContainerStarted","Data":"49fdcdf51e92e973e1872b926675a633377652d6e32de6e919e4c849de5cd07a"} Jan 28 17:21:05 crc kubenswrapper[5001]: I0128 17:21:05.773767 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" event={"ID":"a00ec5cf-b1e4-45a1-b2c9-699419357b5a","Type":"ContainerStarted","Data":"3a782aace1e58ed45447db5bab5103ef823d068922845ee7f9ae938723bfdc27"} Jan 28 17:21:06 crc kubenswrapper[5001]: I0128 17:21:06.781467 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" event={"ID":"c06c774e-b853-4524-a29c-96bfefb8dd4f","Type":"ContainerStarted","Data":"393239acb054de56c9a832b6b3fd2209a279b8895f84bf05797dc8fa13010c3a"} Jan 28 17:21:06 crc kubenswrapper[5001]: I0128 17:21:06.781838 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:06 crc kubenswrapper[5001]: I0128 17:21:06.783762 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" event={"ID":"a00ec5cf-b1e4-45a1-b2c9-699419357b5a","Type":"ContainerStarted","Data":"64e30f8d48ab4d59e78d6eb0f103a43ec71d62d8e9b45f00c180e9408c034ff6"} Jan 28 17:21:06 crc kubenswrapper[5001]: I0128 17:21:06.784002 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" Jan 28 17:21:06 crc kubenswrapper[5001]: I0128 17:21:06.786174 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:06 crc kubenswrapper[5001]: I0128 17:21:06.789426 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" Jan 28 17:21:06 crc kubenswrapper[5001]: I0128 17:21:06.804321 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" podStartSLOduration=4.804300731 podStartE2EDuration="4.804300731s" podCreationTimestamp="2026-01-28 17:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:21:06.804192598 +0000 UTC m=+312.971980838" watchObservedRunningTime="2026-01-28 17:21:06.804300731 +0000 UTC m=+312.972088961" Jan 28 17:21:06 crc kubenswrapper[5001]: I0128 17:21:06.820244 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" podStartSLOduration=4.82022355 podStartE2EDuration="4.82022355s" podCreationTimestamp="2026-01-28 17:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:21:06.818431208 +0000 UTC m=+312.986219438" watchObservedRunningTime="2026-01-28 17:21:06.82022355 +0000 UTC m=+312.988011780" Jan 28 17:21:08 crc kubenswrapper[5001]: I0128 17:21:08.269799 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 17:21:09 crc kubenswrapper[5001]: I0128 17:21:09.593769 5001 scope.go:117] "RemoveContainer" containerID="2a851e1d96e1a5e91bca0535f5a2f0a9739778fe966e40cb4d3c5e00947df55f" Jan 28 17:21:10 crc kubenswrapper[5001]: I0128 17:21:10.805447 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5cc5b65bd-7bzt4_716c725e-e0aa-455a-a6f3-c5d488403f4e/oauth-openshift/2.log" Jan 28 17:21:10 crc kubenswrapper[5001]: I0128 17:21:10.805769 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" event={"ID":"716c725e-e0aa-455a-a6f3-c5d488403f4e","Type":"ContainerStarted","Data":"f3ecfb3e6ae948a1f607b9013e645adf285389a6ac06478fc75bfcc21ec686ba"} Jan 28 17:21:10 crc kubenswrapper[5001]: I0128 17:21:10.806036 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:21:10 crc kubenswrapper[5001]: I0128 17:21:10.810636 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" Jan 28 17:21:10 crc kubenswrapper[5001]: I0128 17:21:10.831035 5001 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-authentication/oauth-openshift-5cc5b65bd-7bzt4" podStartSLOduration=103.831019063 podStartE2EDuration="1m43.831019063s" podCreationTimestamp="2026-01-28 17:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:20:24.534778784 +0000 UTC m=+270.702567014" watchObservedRunningTime="2026-01-28 17:21:10.831019063 +0000 UTC m=+316.998807323" Jan 28 17:21:11 crc kubenswrapper[5001]: I0128 17:21:11.126393 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 17:21:15 crc kubenswrapper[5001]: I0128 17:21:15.756606 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwlzb"] Jan 28 17:21:15 crc kubenswrapper[5001]: I0128 17:21:15.833086 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nwlzb" podUID="a7567e81-456f-4076-9d78-84e85d057dd4" containerName="registry-server" containerID="cri-o://412c8cca5fa0bc67baa3501c8e898331ce77d27c4613c6e4e1b5f113e9543134" gracePeriod=2 Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.323842 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwlzb" Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.448944 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7567e81-456f-4076-9d78-84e85d057dd4-utilities\") pod \"a7567e81-456f-4076-9d78-84e85d057dd4\" (UID: \"a7567e81-456f-4076-9d78-84e85d057dd4\") " Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.449123 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk7w8\" (UniqueName: \"kubernetes.io/projected/a7567e81-456f-4076-9d78-84e85d057dd4-kube-api-access-gk7w8\") pod \"a7567e81-456f-4076-9d78-84e85d057dd4\" (UID: \"a7567e81-456f-4076-9d78-84e85d057dd4\") " Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.449196 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7567e81-456f-4076-9d78-84e85d057dd4-catalog-content\") pod \"a7567e81-456f-4076-9d78-84e85d057dd4\" (UID: \"a7567e81-456f-4076-9d78-84e85d057dd4\") " Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.449956 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7567e81-456f-4076-9d78-84e85d057dd4-utilities" (OuterVolumeSpecName: "utilities") pod "a7567e81-456f-4076-9d78-84e85d057dd4" (UID: "a7567e81-456f-4076-9d78-84e85d057dd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.455598 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7567e81-456f-4076-9d78-84e85d057dd4-kube-api-access-gk7w8" (OuterVolumeSpecName: "kube-api-access-gk7w8") pod "a7567e81-456f-4076-9d78-84e85d057dd4" (UID: "a7567e81-456f-4076-9d78-84e85d057dd4"). InnerVolumeSpecName "kube-api-access-gk7w8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.470578 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7567e81-456f-4076-9d78-84e85d057dd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7567e81-456f-4076-9d78-84e85d057dd4" (UID: "a7567e81-456f-4076-9d78-84e85d057dd4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.550501 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7567e81-456f-4076-9d78-84e85d057dd4-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.550539 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk7w8\" (UniqueName: \"kubernetes.io/projected/a7567e81-456f-4076-9d78-84e85d057dd4-kube-api-access-gk7w8\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.550550 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7567e81-456f-4076-9d78-84e85d057dd4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.840612 5001 generic.go:334] "Generic (PLEG): container finished" podID="a7567e81-456f-4076-9d78-84e85d057dd4" containerID="412c8cca5fa0bc67baa3501c8e898331ce77d27c4613c6e4e1b5f113e9543134" exitCode=0 Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.840677 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwlzb" Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.840687 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwlzb" event={"ID":"a7567e81-456f-4076-9d78-84e85d057dd4","Type":"ContainerDied","Data":"412c8cca5fa0bc67baa3501c8e898331ce77d27c4613c6e4e1b5f113e9543134"} Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.840773 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwlzb" event={"ID":"a7567e81-456f-4076-9d78-84e85d057dd4","Type":"ContainerDied","Data":"8a404721762399366ea1dbab58c0e1e43a12a469877c21e4b1a71ea16869486f"} Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.840801 5001 scope.go:117] "RemoveContainer" containerID="412c8cca5fa0bc67baa3501c8e898331ce77d27c4613c6e4e1b5f113e9543134" Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.862454 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwlzb"] Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.867352 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwlzb"] Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.867647 5001 scope.go:117] "RemoveContainer" containerID="84a55307f487b25a1652c5ad9fb1a83aa740c5ef696667663af86570b9d7c14c" Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.884590 5001 scope.go:117] "RemoveContainer" containerID="c01d0de7b602173587b58766e2933395e961075eac167facf20104b1384cac25" Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.901326 5001 scope.go:117] "RemoveContainer" containerID="412c8cca5fa0bc67baa3501c8e898331ce77d27c4613c6e4e1b5f113e9543134" Jan 28 17:21:16 crc kubenswrapper[5001]: E0128 17:21:16.901827 5001 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"412c8cca5fa0bc67baa3501c8e898331ce77d27c4613c6e4e1b5f113e9543134\": container with ID starting with 412c8cca5fa0bc67baa3501c8e898331ce77d27c4613c6e4e1b5f113e9543134 not found: ID does not exist" containerID="412c8cca5fa0bc67baa3501c8e898331ce77d27c4613c6e4e1b5f113e9543134" Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.901873 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"412c8cca5fa0bc67baa3501c8e898331ce77d27c4613c6e4e1b5f113e9543134"} err="failed to get container status \"412c8cca5fa0bc67baa3501c8e898331ce77d27c4613c6e4e1b5f113e9543134\": rpc error: code = NotFound desc = could not find container \"412c8cca5fa0bc67baa3501c8e898331ce77d27c4613c6e4e1b5f113e9543134\": container with ID starting with 412c8cca5fa0bc67baa3501c8e898331ce77d27c4613c6e4e1b5f113e9543134 not found: ID does not exist" Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.901904 5001 scope.go:117] "RemoveContainer" containerID="84a55307f487b25a1652c5ad9fb1a83aa740c5ef696667663af86570b9d7c14c" Jan 28 17:21:16 crc kubenswrapper[5001]: E0128 17:21:16.902160 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84a55307f487b25a1652c5ad9fb1a83aa740c5ef696667663af86570b9d7c14c\": container with ID starting with 84a55307f487b25a1652c5ad9fb1a83aa740c5ef696667663af86570b9d7c14c not found: ID does not exist" containerID="84a55307f487b25a1652c5ad9fb1a83aa740c5ef696667663af86570b9d7c14c" Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.902183 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84a55307f487b25a1652c5ad9fb1a83aa740c5ef696667663af86570b9d7c14c"} err="failed to get container status \"84a55307f487b25a1652c5ad9fb1a83aa740c5ef696667663af86570b9d7c14c\": rpc error: code = NotFound desc = could not find container \"84a55307f487b25a1652c5ad9fb1a83aa740c5ef696667663af86570b9d7c14c\": container with ID starting with 84a55307f487b25a1652c5ad9fb1a83aa740c5ef696667663af86570b9d7c14c not found: ID does not exist" Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.902200 5001 scope.go:117] "RemoveContainer" containerID="c01d0de7b602173587b58766e2933395e961075eac167facf20104b1384cac25" Jan 28 17:21:16 crc kubenswrapper[5001]: E0128 17:21:16.902472 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c01d0de7b602173587b58766e2933395e961075eac167facf20104b1384cac25\": container with ID starting with c01d0de7b602173587b58766e2933395e961075eac167facf20104b1384cac25 not found: ID does not exist" containerID="c01d0de7b602173587b58766e2933395e961075eac167facf20104b1384cac25" Jan 28 17:21:16 crc kubenswrapper[5001]: I0128 17:21:16.902523 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c01d0de7b602173587b58766e2933395e961075eac167facf20104b1384cac25"} err="failed to get container status \"c01d0de7b602173587b58766e2933395e961075eac167facf20104b1384cac25\": rpc error: code = NotFound desc = could not find container \"c01d0de7b602173587b58766e2933395e961075eac167facf20104b1384cac25\": container with ID starting with c01d0de7b602173587b58766e2933395e961075eac167facf20104b1384cac25 not found: ID does not exist" Jan 28 17:21:18 crc kubenswrapper[5001]: I0128 17:21:18.599808 5001 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="a7567e81-456f-4076-9d78-84e85d057dd4" path="/var/lib/kubelet/pods/a7567e81-456f-4076-9d78-84e85d057dd4/volumes" Jan 28 17:21:22 crc kubenswrapper[5001]: I0128 17:21:22.707455 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55d4c5b566-gthvk"] Jan 28 17:21:22 crc kubenswrapper[5001]: I0128 17:21:22.708117 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" podUID="c06c774e-b853-4524-a29c-96bfefb8dd4f" containerName="controller-manager" containerID="cri-o://393239acb054de56c9a832b6b3fd2209a279b8895f84bf05797dc8fa13010c3a" gracePeriod=30 Jan 28 17:21:22 crc kubenswrapper[5001]: I0128 17:21:22.725586 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r"] Jan 28 17:21:22 crc kubenswrapper[5001]: I0128 17:21:22.725823 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" podUID="a00ec5cf-b1e4-45a1-b2c9-699419357b5a" containerName="route-controller-manager" containerID="cri-o://64e30f8d48ab4d59e78d6eb0f103a43ec71d62d8e9b45f00c180e9408c034ff6" gracePeriod=30 Jan 28 17:21:22 crc kubenswrapper[5001]: I0128 17:21:22.874055 5001 generic.go:334] "Generic (PLEG): container finished" podID="a00ec5cf-b1e4-45a1-b2c9-699419357b5a" containerID="64e30f8d48ab4d59e78d6eb0f103a43ec71d62d8e9b45f00c180e9408c034ff6" exitCode=0 Jan 28 17:21:22 crc kubenswrapper[5001]: I0128 17:21:22.874173 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" event={"ID":"a00ec5cf-b1e4-45a1-b2c9-699419357b5a","Type":"ContainerDied","Data":"64e30f8d48ab4d59e78d6eb0f103a43ec71d62d8e9b45f00c180e9408c034ff6"} Jan 28 17:21:22 crc kubenswrapper[5001]: I0128 17:21:22.878245 5001 generic.go:334] "Generic (PLEG): container finished" podID="c06c774e-b853-4524-a29c-96bfefb8dd4f" containerID="393239acb054de56c9a832b6b3fd2209a279b8895f84bf05797dc8fa13010c3a" exitCode=0 Jan 28 17:21:22 crc kubenswrapper[5001]: I0128 17:21:22.878281 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" event={"ID":"c06c774e-b853-4524-a29c-96bfefb8dd4f","Type":"ContainerDied","Data":"393239acb054de56c9a832b6b3fd2209a279b8895f84bf05797dc8fa13010c3a"} Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.255681 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.339334 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-client-ca\") pod \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\" (UID: \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\") " Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.339410 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-config\") pod \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\" (UID: \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\") " Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.339483 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97x4x\" (UniqueName: \"kubernetes.io/projected/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-kube-api-access-97x4x\") pod \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\" (UID: \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\") " Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.339654 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-serving-cert\") pod \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\" (UID: \"a00ec5cf-b1e4-45a1-b2c9-699419357b5a\") " Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.340333 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-client-ca" (OuterVolumeSpecName: "client-ca") pod "a00ec5cf-b1e4-45a1-b2c9-699419357b5a" (UID: "a00ec5cf-b1e4-45a1-b2c9-699419357b5a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.340822 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-config" (OuterVolumeSpecName: "config") pod "a00ec5cf-b1e4-45a1-b2c9-699419357b5a" (UID: "a00ec5cf-b1e4-45a1-b2c9-699419357b5a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.345081 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-kube-api-access-97x4x" (OuterVolumeSpecName: "kube-api-access-97x4x") pod "a00ec5cf-b1e4-45a1-b2c9-699419357b5a" (UID: "a00ec5cf-b1e4-45a1-b2c9-699419357b5a"). InnerVolumeSpecName "kube-api-access-97x4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.345312 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a00ec5cf-b1e4-45a1-b2c9-699419357b5a" (UID: "a00ec5cf-b1e4-45a1-b2c9-699419357b5a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.370685 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.441310 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c06c774e-b853-4524-a29c-96bfefb8dd4f-client-ca\") pod \"c06c774e-b853-4524-a29c-96bfefb8dd4f\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.441408 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c06c774e-b853-4524-a29c-96bfefb8dd4f-config\") pod \"c06c774e-b853-4524-a29c-96bfefb8dd4f\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.441474 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c06c774e-b853-4524-a29c-96bfefb8dd4f-serving-cert\") pod \"c06c774e-b853-4524-a29c-96bfefb8dd4f\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.441527 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87qb4\" (UniqueName: \"kubernetes.io/projected/c06c774e-b853-4524-a29c-96bfefb8dd4f-kube-api-access-87qb4\") pod \"c06c774e-b853-4524-a29c-96bfefb8dd4f\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.441579 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c06c774e-b853-4524-a29c-96bfefb8dd4f-proxy-ca-bundles\") pod \"c06c774e-b853-4524-a29c-96bfefb8dd4f\" (UID: \"c06c774e-b853-4524-a29c-96bfefb8dd4f\") " Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.441796 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.441821 5001 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.441833 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.441843 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97x4x\" (UniqueName: \"kubernetes.io/projected/a00ec5cf-b1e4-45a1-b2c9-699419357b5a-kube-api-access-97x4x\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.442291 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c06c774e-b853-4524-a29c-96bfefb8dd4f-client-ca" (OuterVolumeSpecName: "client-ca") pod "c06c774e-b853-4524-a29c-96bfefb8dd4f" (UID: "c06c774e-b853-4524-a29c-96bfefb8dd4f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.442309 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c06c774e-b853-4524-a29c-96bfefb8dd4f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c06c774e-b853-4524-a29c-96bfefb8dd4f" (UID: "c06c774e-b853-4524-a29c-96bfefb8dd4f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.442718 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c06c774e-b853-4524-a29c-96bfefb8dd4f-config" (OuterVolumeSpecName: "config") pod "c06c774e-b853-4524-a29c-96bfefb8dd4f" (UID: "c06c774e-b853-4524-a29c-96bfefb8dd4f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.444728 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c06c774e-b853-4524-a29c-96bfefb8dd4f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c06c774e-b853-4524-a29c-96bfefb8dd4f" (UID: "c06c774e-b853-4524-a29c-96bfefb8dd4f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.445251 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c06c774e-b853-4524-a29c-96bfefb8dd4f-kube-api-access-87qb4" (OuterVolumeSpecName: "kube-api-access-87qb4") pod "c06c774e-b853-4524-a29c-96bfefb8dd4f" (UID: "c06c774e-b853-4524-a29c-96bfefb8dd4f"). InnerVolumeSpecName "kube-api-access-87qb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.542874 5001 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c06c774e-b853-4524-a29c-96bfefb8dd4f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.542919 5001 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c06c774e-b853-4524-a29c-96bfefb8dd4f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.542932 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c06c774e-b853-4524-a29c-96bfefb8dd4f-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.542942 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c06c774e-b853-4524-a29c-96bfefb8dd4f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.542954 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87qb4\" (UniqueName: \"kubernetes.io/projected/c06c774e-b853-4524-a29c-96bfefb8dd4f-kube-api-access-87qb4\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.747584 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-59f675d9d7-zgm46"] Jan 28 17:21:23 crc kubenswrapper[5001]: E0128 17:21:23.749093 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7567e81-456f-4076-9d78-84e85d057dd4" containerName="extract-utilities" Jan 28 17:21:23 crc 
kubenswrapper[5001]: I0128 17:21:23.749210 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7567e81-456f-4076-9d78-84e85d057dd4" containerName="extract-utilities" Jan 28 17:21:23 crc kubenswrapper[5001]: E0128 17:21:23.749316 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7567e81-456f-4076-9d78-84e85d057dd4" containerName="extract-content" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.749426 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7567e81-456f-4076-9d78-84e85d057dd4" containerName="extract-content" Jan 28 17:21:23 crc kubenswrapper[5001]: E0128 17:21:23.749523 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a00ec5cf-b1e4-45a1-b2c9-699419357b5a" containerName="route-controller-manager" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.749612 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="a00ec5cf-b1e4-45a1-b2c9-699419357b5a" containerName="route-controller-manager" Jan 28 17:21:23 crc kubenswrapper[5001]: E0128 17:21:23.749706 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7567e81-456f-4076-9d78-84e85d057dd4" containerName="registry-server" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.749793 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7567e81-456f-4076-9d78-84e85d057dd4" containerName="registry-server" Jan 28 17:21:23 crc kubenswrapper[5001]: E0128 17:21:23.749871 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c06c774e-b853-4524-a29c-96bfefb8dd4f" containerName="controller-manager" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.749952 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="c06c774e-b853-4524-a29c-96bfefb8dd4f" containerName="controller-manager" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.750126 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="c06c774e-b853-4524-a29c-96bfefb8dd4f" containerName="controller-manager" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.750210 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7567e81-456f-4076-9d78-84e85d057dd4" containerName="registry-server" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.750316 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="a00ec5cf-b1e4-45a1-b2c9-699419357b5a" containerName="route-controller-manager" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.750759 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.758335 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59f675d9d7-zgm46"] Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.846069 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-config\") pod \"controller-manager-59f675d9d7-zgm46\" (UID: \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.846160 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-serving-cert\") pod \"controller-manager-59f675d9d7-zgm46\" (UID: \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.846195 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-client-ca\") pod \"controller-manager-59f675d9d7-zgm46\" (UID: \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.846374 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4mt9\" (UniqueName: \"kubernetes.io/projected/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-kube-api-access-v4mt9\") pod \"controller-manager-59f675d9d7-zgm46\" (UID: \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.846435 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-proxy-ca-bundles\") pod \"controller-manager-59f675d9d7-zgm46\" (UID: \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.886718 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" event={"ID":"c06c774e-b853-4524-a29c-96bfefb8dd4f","Type":"ContainerDied","Data":"49fdcdf51e92e973e1872b926675a633377652d6e32de6e919e4c849de5cd07a"} Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.886767 5001 scope.go:117] "RemoveContainer" containerID="393239acb054de56c9a832b6b3fd2209a279b8895f84bf05797dc8fa13010c3a" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.887048 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55d4c5b566-gthvk" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.897368 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" event={"ID":"a00ec5cf-b1e4-45a1-b2c9-699419357b5a","Type":"ContainerDied","Data":"3a782aace1e58ed45447db5bab5103ef823d068922845ee7f9ae938723bfdc27"} Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.897505 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.914262 5001 scope.go:117] "RemoveContainer" containerID="64e30f8d48ab4d59e78d6eb0f103a43ec71d62d8e9b45f00c180e9408c034ff6" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.921802 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55d4c5b566-gthvk"] Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.925305 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-55d4c5b566-gthvk"] Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.930679 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r"] Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.934072 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc9b688db-zqm6r"] Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.948289 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4mt9\" (UniqueName: \"kubernetes.io/projected/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-kube-api-access-v4mt9\") pod \"controller-manager-59f675d9d7-zgm46\" (UID: \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.948341 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-proxy-ca-bundles\") pod \"controller-manager-59f675d9d7-zgm46\" (UID: \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.948395 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-config\") pod \"controller-manager-59f675d9d7-zgm46\" (UID: \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.948440 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-serving-cert\") pod \"controller-manager-59f675d9d7-zgm46\" (UID: \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.948465 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-client-ca\") pod \"controller-manager-59f675d9d7-zgm46\" (UID: \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.949465 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-client-ca\") pod \"controller-manager-59f675d9d7-zgm46\" (UID: \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.949875 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-config\") pod \"controller-manager-59f675d9d7-zgm46\" (UID: \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.950861 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-proxy-ca-bundles\") pod \"controller-manager-59f675d9d7-zgm46\" (UID: \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.952903 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-serving-cert\") pod \"controller-manager-59f675d9d7-zgm46\" (UID: \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:21:23 crc kubenswrapper[5001]: I0128 17:21:23.964701 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4mt9\" (UniqueName: \"kubernetes.io/projected/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-kube-api-access-v4mt9\") pod \"controller-manager-59f675d9d7-zgm46\" (UID: \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.069585 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.483148 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59f675d9d7-zgm46"] Jan 28 17:21:24 crc kubenswrapper[5001]: W0128 17:21:24.485844 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c09413b_2c61_4fe0_b39e_c23fcf5e5034.slice/crio-1d995bd133dc31d25e4c61463f9f765542d9291155d9e736a2e9916bec72f0e8 WatchSource:0}: Error finding container 1d995bd133dc31d25e4c61463f9f765542d9291155d9e736a2e9916bec72f0e8: Status 404 returned error can't find the container with id 1d995bd133dc31d25e4c61463f9f765542d9291155d9e736a2e9916bec72f0e8 Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.604290 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a00ec5cf-b1e4-45a1-b2c9-699419357b5a" path="/var/lib/kubelet/pods/a00ec5cf-b1e4-45a1-b2c9-699419357b5a/volumes" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.605147 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c06c774e-b853-4524-a29c-96bfefb8dd4f" path="/var/lib/kubelet/pods/c06c774e-b853-4524-a29c-96bfefb8dd4f/volumes" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.748714 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q"] Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.749454 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.753009 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.753086 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.753181 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.754958 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.755288 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.755516 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.759690 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q"] Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.864692 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsbsk\" (UniqueName: \"kubernetes.io/projected/6156b50d-0bb8-4e51-823f-24e4339d3f60-kube-api-access-lsbsk\") pod \"route-controller-manager-7d59bd9699-8pq6q\" (UID: \"6156b50d-0bb8-4e51-823f-24e4339d3f60\") " 
pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.865492 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6156b50d-0bb8-4e51-823f-24e4339d3f60-client-ca\") pod \"route-controller-manager-7d59bd9699-8pq6q\" (UID: \"6156b50d-0bb8-4e51-823f-24e4339d3f60\") " pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.865647 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6156b50d-0bb8-4e51-823f-24e4339d3f60-config\") pod \"route-controller-manager-7d59bd9699-8pq6q\" (UID: \"6156b50d-0bb8-4e51-823f-24e4339d3f60\") " pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.866359 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6156b50d-0bb8-4e51-823f-24e4339d3f60-serving-cert\") pod \"route-controller-manager-7d59bd9699-8pq6q\" (UID: \"6156b50d-0bb8-4e51-823f-24e4339d3f60\") " pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.905122 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" event={"ID":"3c09413b-2c61-4fe0-b39e-c23fcf5e5034","Type":"ContainerStarted","Data":"e97253145ecd5902f3b76938ad4f6ee9578816667c637302113398c4b6b740e0"} Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.905176 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" event={"ID":"3c09413b-2c61-4fe0-b39e-c23fcf5e5034","Type":"ContainerStarted","Data":"1d995bd133dc31d25e4c61463f9f765542d9291155d9e736a2e9916bec72f0e8"} Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.905506 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.909063 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.931774 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" podStartSLOduration=2.931748328 podStartE2EDuration="2.931748328s" podCreationTimestamp="2026-01-28 17:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:21:24.923801501 +0000 UTC m=+331.091589731" watchObservedRunningTime="2026-01-28 17:21:24.931748328 +0000 UTC m=+331.099536558" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.967905 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6156b50d-0bb8-4e51-823f-24e4339d3f60-serving-cert\") pod \"route-controller-manager-7d59bd9699-8pq6q\" (UID: \"6156b50d-0bb8-4e51-823f-24e4339d3f60\") " pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" Jan 28 
17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.967992 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsbsk\" (UniqueName: \"kubernetes.io/projected/6156b50d-0bb8-4e51-823f-24e4339d3f60-kube-api-access-lsbsk\") pod \"route-controller-manager-7d59bd9699-8pq6q\" (UID: \"6156b50d-0bb8-4e51-823f-24e4339d3f60\") " pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.968046 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6156b50d-0bb8-4e51-823f-24e4339d3f60-client-ca\") pod \"route-controller-manager-7d59bd9699-8pq6q\" (UID: \"6156b50d-0bb8-4e51-823f-24e4339d3f60\") " pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.968070 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6156b50d-0bb8-4e51-823f-24e4339d3f60-config\") pod \"route-controller-manager-7d59bd9699-8pq6q\" (UID: \"6156b50d-0bb8-4e51-823f-24e4339d3f60\") " pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.969840 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6156b50d-0bb8-4e51-823f-24e4339d3f60-config\") pod \"route-controller-manager-7d59bd9699-8pq6q\" (UID: \"6156b50d-0bb8-4e51-823f-24e4339d3f60\") " pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.970208 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6156b50d-0bb8-4e51-823f-24e4339d3f60-client-ca\") pod \"route-controller-manager-7d59bd9699-8pq6q\" (UID: \"6156b50d-0bb8-4e51-823f-24e4339d3f60\") " pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.976331 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6156b50d-0bb8-4e51-823f-24e4339d3f60-serving-cert\") pod \"route-controller-manager-7d59bd9699-8pq6q\" (UID: \"6156b50d-0bb8-4e51-823f-24e4339d3f60\") " pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" Jan 28 17:21:24 crc kubenswrapper[5001]: I0128 17:21:24.992349 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsbsk\" (UniqueName: \"kubernetes.io/projected/6156b50d-0bb8-4e51-823f-24e4339d3f60-kube-api-access-lsbsk\") pod \"route-controller-manager-7d59bd9699-8pq6q\" (UID: \"6156b50d-0bb8-4e51-823f-24e4339d3f60\") " pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" Jan 28 17:21:25 crc kubenswrapper[5001]: I0128 17:21:25.072109 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" Jan 28 17:21:25 crc kubenswrapper[5001]: I0128 17:21:25.498586 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q"] Jan 28 17:21:25 crc kubenswrapper[5001]: W0128 17:21:25.501871 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6156b50d_0bb8_4e51_823f_24e4339d3f60.slice/crio-b3702e6d7d85f5acd20471fe8380dea68db605632cc2bca5564d75efa3945233 WatchSource:0}: Error finding container b3702e6d7d85f5acd20471fe8380dea68db605632cc2bca5564d75efa3945233: Status 404 returned error can't find the container with id b3702e6d7d85f5acd20471fe8380dea68db605632cc2bca5564d75efa3945233 Jan 28 17:21:25 crc kubenswrapper[5001]: I0128 17:21:25.928679 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" event={"ID":"6156b50d-0bb8-4e51-823f-24e4339d3f60","Type":"ContainerStarted","Data":"8f34b697e297899c43ab7cd0b9a3935653c168de26e333e0612a210eebefddeb"} Jan 28 17:21:25 crc kubenswrapper[5001]: I0128 17:21:25.928729 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" event={"ID":"6156b50d-0bb8-4e51-823f-24e4339d3f60","Type":"ContainerStarted","Data":"b3702e6d7d85f5acd20471fe8380dea68db605632cc2bca5564d75efa3945233"} Jan 28 17:21:25 crc kubenswrapper[5001]: I0128 17:21:25.929038 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" Jan 28 17:21:25 crc kubenswrapper[5001]: I0128 17:21:25.954834 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" podStartSLOduration=3.954817415 podStartE2EDuration="3.954817415s" podCreationTimestamp="2026-01-28 17:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:21:25.952425257 +0000 UTC m=+332.120213487" watchObservedRunningTime="2026-01-28 17:21:25.954817415 +0000 UTC m=+332.122605645" Jan 28 17:21:26 crc kubenswrapper[5001]: I0128 17:21:26.042409 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" Jan 28 17:21:26 crc kubenswrapper[5001]: I0128 17:21:26.494182 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 17:22:00 crc kubenswrapper[5001]: I0128 17:22:00.767618 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-r8qqd"] Jan 28 17:22:00 crc kubenswrapper[5001]: I0128 17:22:00.768932 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:00 crc kubenswrapper[5001]: I0128 17:22:00.780182 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-r8qqd"] Jan 28 17:22:00 crc kubenswrapper[5001]: I0128 17:22:00.921280 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:00 crc kubenswrapper[5001]: I0128 17:22:00.921368 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8198a660-8624-41b2-bf4a-850f096c4633-ca-trust-extracted\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:00 crc kubenswrapper[5001]: I0128 17:22:00.921448 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8198a660-8624-41b2-bf4a-850f096c4633-registry-tls\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:00 crc kubenswrapper[5001]: I0128 17:22:00.921561 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8198a660-8624-41b2-bf4a-850f096c4633-registry-certificates\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:00 crc kubenswrapper[5001]: I0128 17:22:00.921645 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwvx5\" (UniqueName: \"kubernetes.io/projected/8198a660-8624-41b2-bf4a-850f096c4633-kube-api-access-nwvx5\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:00 crc kubenswrapper[5001]: I0128 17:22:00.921723 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8198a660-8624-41b2-bf4a-850f096c4633-installation-pull-secrets\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:00 crc kubenswrapper[5001]: I0128 17:22:00.921761 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8198a660-8624-41b2-bf4a-850f096c4633-trusted-ca\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:00 crc kubenswrapper[5001]: I0128 17:22:00.921907 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8198a660-8624-41b2-bf4a-850f096c4633-bound-sa-token\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:00 crc kubenswrapper[5001]: I0128 17:22:00.945591 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:01 crc kubenswrapper[5001]: I0128 17:22:01.022993 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8198a660-8624-41b2-bf4a-850f096c4633-bound-sa-token\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:01 crc kubenswrapper[5001]: I0128 17:22:01.023072 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8198a660-8624-41b2-bf4a-850f096c4633-ca-trust-extracted\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:01 crc kubenswrapper[5001]: I0128 17:22:01.023111 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8198a660-8624-41b2-bf4a-850f096c4633-registry-tls\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:01 crc kubenswrapper[5001]: I0128 17:22:01.023135 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8198a660-8624-41b2-bf4a-850f096c4633-registry-certificates\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:01 crc kubenswrapper[5001]: I0128 17:22:01.023163 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwvx5\" (UniqueName: \"kubernetes.io/projected/8198a660-8624-41b2-bf4a-850f096c4633-kube-api-access-nwvx5\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:01 crc kubenswrapper[5001]: I0128 17:22:01.023185 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8198a660-8624-41b2-bf4a-850f096c4633-installation-pull-secrets\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:01 crc kubenswrapper[5001]: I0128 17:22:01.023202 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8198a660-8624-41b2-bf4a-850f096c4633-trusted-ca\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:01 crc kubenswrapper[5001]: I0128 17:22:01.023633 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8198a660-8624-41b2-bf4a-850f096c4633-ca-trust-extracted\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:01 crc kubenswrapper[5001]: I0128 17:22:01.024958 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8198a660-8624-41b2-bf4a-850f096c4633-trusted-ca\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:01 crc kubenswrapper[5001]: I0128 17:22:01.025308 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8198a660-8624-41b2-bf4a-850f096c4633-registry-certificates\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:01 crc kubenswrapper[5001]: I0128 17:22:01.030680 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8198a660-8624-41b2-bf4a-850f096c4633-registry-tls\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:01 crc kubenswrapper[5001]: I0128 17:22:01.032724 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8198a660-8624-41b2-bf4a-850f096c4633-installation-pull-secrets\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:01 crc kubenswrapper[5001]: I0128 17:22:01.038733 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8198a660-8624-41b2-bf4a-850f096c4633-bound-sa-token\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:01 crc kubenswrapper[5001]: I0128 17:22:01.043375 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwvx5\" (UniqueName: \"kubernetes.io/projected/8198a660-8624-41b2-bf4a-850f096c4633-kube-api-access-nwvx5\") pod \"image-registry-66df7c8f76-r8qqd\" (UID: \"8198a660-8624-41b2-bf4a-850f096c4633\") " pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:01 crc kubenswrapper[5001]: I0128 17:22:01.097443 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:01 crc kubenswrapper[5001]: I0128 17:22:01.515841 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-r8qqd"] Jan 28 17:22:01 crc kubenswrapper[5001]: W0128 17:22:01.522296 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8198a660_8624_41b2_bf4a_850f096c4633.slice/crio-98092e8a0e09bcc9b0a6ebe9187eeaa77af5805dc2c67335f8451f5956a8b80d WatchSource:0}: Error finding container 98092e8a0e09bcc9b0a6ebe9187eeaa77af5805dc2c67335f8451f5956a8b80d: Status 404 returned error can't find the container with id 98092e8a0e09bcc9b0a6ebe9187eeaa77af5805dc2c67335f8451f5956a8b80d Jan 28 17:22:02 crc kubenswrapper[5001]: I0128 17:22:02.133897 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" event={"ID":"8198a660-8624-41b2-bf4a-850f096c4633","Type":"ContainerStarted","Data":"98092e8a0e09bcc9b0a6ebe9187eeaa77af5805dc2c67335f8451f5956a8b80d"} Jan 28 17:22:02 crc kubenswrapper[5001]: I0128 17:22:02.729916 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q"] Jan 28 17:22:02 crc kubenswrapper[5001]: I0128 17:22:02.730476 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" podUID="6156b50d-0bb8-4e51-823f-24e4339d3f60" containerName="route-controller-manager" containerID="cri-o://8f34b697e297899c43ab7cd0b9a3935653c168de26e333e0612a210eebefddeb" gracePeriod=30 Jan 28 17:22:03 crc kubenswrapper[5001]: I0128 17:22:03.153080 5001 generic.go:334] "Generic (PLEG): container finished" podID="6156b50d-0bb8-4e51-823f-24e4339d3f60" containerID="8f34b697e297899c43ab7cd0b9a3935653c168de26e333e0612a210eebefddeb" exitCode=0 Jan 28 17:22:03 crc kubenswrapper[5001]: I0128 17:22:03.153181 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" event={"ID":"6156b50d-0bb8-4e51-823f-24e4339d3f60","Type":"ContainerDied","Data":"8f34b697e297899c43ab7cd0b9a3935653c168de26e333e0612a210eebefddeb"} Jan 28 17:22:03 crc kubenswrapper[5001]: I0128 17:22:03.156490 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" event={"ID":"8198a660-8624-41b2-bf4a-850f096c4633","Type":"ContainerStarted","Data":"94dc008e4d71fa4b4936299809c262e27f91a98b55bc13c23e2f1994e8572d3c"} Jan 28 17:22:03 crc kubenswrapper[5001]: I0128 17:22:03.157117 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:03 crc kubenswrapper[5001]: I0128 17:22:03.182873 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" podStartSLOduration=3.182851155 podStartE2EDuration="3.182851155s" podCreationTimestamp="2026-01-28 17:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:22:03.18022633 +0000 UTC m=+369.348014580" watchObservedRunningTime="2026-01-28 17:22:03.182851155 +0000 UTC m=+369.350639385" Jan 28 17:22:03 crc kubenswrapper[5001]: I0128 17:22:03.893012 5001 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" Jan 28 17:22:03 crc kubenswrapper[5001]: I0128 17:22:03.918611 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5"] Jan 28 17:22:03 crc kubenswrapper[5001]: E0128 17:22:03.918865 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6156b50d-0bb8-4e51-823f-24e4339d3f60" containerName="route-controller-manager" Jan 28 17:22:03 crc kubenswrapper[5001]: I0128 17:22:03.918885 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="6156b50d-0bb8-4e51-823f-24e4339d3f60" containerName="route-controller-manager" Jan 28 17:22:03 crc kubenswrapper[5001]: I0128 17:22:03.919033 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="6156b50d-0bb8-4e51-823f-24e4339d3f60" containerName="route-controller-manager" Jan 28 17:22:03 crc kubenswrapper[5001]: I0128 17:22:03.919526 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5" Jan 28 17:22:03 crc kubenswrapper[5001]: I0128 17:22:03.930996 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5"] Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.064573 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6156b50d-0bb8-4e51-823f-24e4339d3f60-serving-cert\") pod \"6156b50d-0bb8-4e51-823f-24e4339d3f60\" (UID: \"6156b50d-0bb8-4e51-823f-24e4339d3f60\") " Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.064904 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6156b50d-0bb8-4e51-823f-24e4339d3f60-config\") pod \"6156b50d-0bb8-4e51-823f-24e4339d3f60\" (UID: \"6156b50d-0bb8-4e51-823f-24e4339d3f60\") " Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.064960 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6156b50d-0bb8-4e51-823f-24e4339d3f60-client-ca\") pod \"6156b50d-0bb8-4e51-823f-24e4339d3f60\" (UID: \"6156b50d-0bb8-4e51-823f-24e4339d3f60\") " Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.065054 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsbsk\" (UniqueName: \"kubernetes.io/projected/6156b50d-0bb8-4e51-823f-24e4339d3f60-kube-api-access-lsbsk\") pod \"6156b50d-0bb8-4e51-823f-24e4339d3f60\" (UID: \"6156b50d-0bb8-4e51-823f-24e4339d3f60\") " Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.065206 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a8a3b2b-5591-4490-9c73-be998b8ba644-client-ca\") pod \"route-controller-manager-6bc9b688db-9c7p5\" (UID: \"7a8a3b2b-5591-4490-9c73-be998b8ba644\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.065241 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a8a3b2b-5591-4490-9c73-be998b8ba644-serving-cert\") pod 
\"route-controller-manager-6bc9b688db-9c7p5\" (UID: \"7a8a3b2b-5591-4490-9c73-be998b8ba644\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.065278 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxktd\" (UniqueName: \"kubernetes.io/projected/7a8a3b2b-5591-4490-9c73-be998b8ba644-kube-api-access-rxktd\") pod \"route-controller-manager-6bc9b688db-9c7p5\" (UID: \"7a8a3b2b-5591-4490-9c73-be998b8ba644\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.065308 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a8a3b2b-5591-4490-9c73-be998b8ba644-config\") pod \"route-controller-manager-6bc9b688db-9c7p5\" (UID: \"7a8a3b2b-5591-4490-9c73-be998b8ba644\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.065617 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6156b50d-0bb8-4e51-823f-24e4339d3f60-client-ca" (OuterVolumeSpecName: "client-ca") pod "6156b50d-0bb8-4e51-823f-24e4339d3f60" (UID: "6156b50d-0bb8-4e51-823f-24e4339d3f60"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.065628 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6156b50d-0bb8-4e51-823f-24e4339d3f60-config" (OuterVolumeSpecName: "config") pod "6156b50d-0bb8-4e51-823f-24e4339d3f60" (UID: "6156b50d-0bb8-4e51-823f-24e4339d3f60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.071119 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6156b50d-0bb8-4e51-823f-24e4339d3f60-kube-api-access-lsbsk" (OuterVolumeSpecName: "kube-api-access-lsbsk") pod "6156b50d-0bb8-4e51-823f-24e4339d3f60" (UID: "6156b50d-0bb8-4e51-823f-24e4339d3f60"). InnerVolumeSpecName "kube-api-access-lsbsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.071231 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6156b50d-0bb8-4e51-823f-24e4339d3f60-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6156b50d-0bb8-4e51-823f-24e4339d3f60" (UID: "6156b50d-0bb8-4e51-823f-24e4339d3f60"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.163442 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" event={"ID":"6156b50d-0bb8-4e51-823f-24e4339d3f60","Type":"ContainerDied","Data":"b3702e6d7d85f5acd20471fe8380dea68db605632cc2bca5564d75efa3945233"} Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.163567 5001 scope.go:117] "RemoveContainer" containerID="8f34b697e297899c43ab7cd0b9a3935653c168de26e333e0612a210eebefddeb" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.163792 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.166914 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a8a3b2b-5591-4490-9c73-be998b8ba644-client-ca\") pod \"route-controller-manager-6bc9b688db-9c7p5\" (UID: \"7a8a3b2b-5591-4490-9c73-be998b8ba644\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.166996 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a8a3b2b-5591-4490-9c73-be998b8ba644-serving-cert\") pod \"route-controller-manager-6bc9b688db-9c7p5\" (UID: \"7a8a3b2b-5591-4490-9c73-be998b8ba644\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.167051 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxktd\" (UniqueName: \"kubernetes.io/projected/7a8a3b2b-5591-4490-9c73-be998b8ba644-kube-api-access-rxktd\") pod \"route-controller-manager-6bc9b688db-9c7p5\" (UID: \"7a8a3b2b-5591-4490-9c73-be998b8ba644\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.167096 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a8a3b2b-5591-4490-9c73-be998b8ba644-config\") pod \"route-controller-manager-6bc9b688db-9c7p5\" (UID: \"7a8a3b2b-5591-4490-9c73-be998b8ba644\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.167151 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsbsk\" (UniqueName: \"kubernetes.io/projected/6156b50d-0bb8-4e51-823f-24e4339d3f60-kube-api-access-lsbsk\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.167172 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6156b50d-0bb8-4e51-823f-24e4339d3f60-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.167184 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6156b50d-0bb8-4e51-823f-24e4339d3f60-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.167195 5001 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6156b50d-0bb8-4e51-823f-24e4339d3f60-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.168235 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a8a3b2b-5591-4490-9c73-be998b8ba644-client-ca\") pod \"route-controller-manager-6bc9b688db-9c7p5\" (UID: \"7a8a3b2b-5591-4490-9c73-be998b8ba644\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.168500 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7a8a3b2b-5591-4490-9c73-be998b8ba644-config\") pod \"route-controller-manager-6bc9b688db-9c7p5\" (UID: \"7a8a3b2b-5591-4490-9c73-be998b8ba644\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.171304 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a8a3b2b-5591-4490-9c73-be998b8ba644-serving-cert\") pod \"route-controller-manager-6bc9b688db-9c7p5\" (UID: \"7a8a3b2b-5591-4490-9c73-be998b8ba644\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.187704 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxktd\" (UniqueName: \"kubernetes.io/projected/7a8a3b2b-5591-4490-9c73-be998b8ba644-kube-api-access-rxktd\") pod \"route-controller-manager-6bc9b688db-9c7p5\" (UID: \"7a8a3b2b-5591-4490-9c73-be998b8ba644\") " pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.206767 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q"] Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.212089 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d59bd9699-8pq6q"] Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.236702 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.473308 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5"] Jan 28 17:22:04 crc kubenswrapper[5001]: W0128 17:22:04.479229 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a8a3b2b_5591_4490_9c73_be998b8ba644.slice/crio-9ae9f28ca12414343b39ec8e5b42a2c6e2856762b3c6f0d9bfd7483d3de225d2 WatchSource:0}: Error finding container 9ae9f28ca12414343b39ec8e5b42a2c6e2856762b3c6f0d9bfd7483d3de225d2: Status 404 returned error can't find the container with id 9ae9f28ca12414343b39ec8e5b42a2c6e2856762b3c6f0d9bfd7483d3de225d2 Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.602326 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6156b50d-0bb8-4e51-823f-24e4339d3f60" path="/var/lib/kubelet/pods/6156b50d-0bb8-4e51-823f-24e4339d3f60/volumes" Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.857925 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:22:04 crc kubenswrapper[5001]: I0128 17:22:04.858013 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:22:05 crc kubenswrapper[5001]: I0128 17:22:05.170252 
5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5" event={"ID":"7a8a3b2b-5591-4490-9c73-be998b8ba644","Type":"ContainerStarted","Data":"62d848c693a2e4ef8c35a10ea1ad40e1edbb8c41251f379da2cb7a3d18b4c755"} Jan 28 17:22:05 crc kubenswrapper[5001]: I0128 17:22:05.170318 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5" event={"ID":"7a8a3b2b-5591-4490-9c73-be998b8ba644","Type":"ContainerStarted","Data":"9ae9f28ca12414343b39ec8e5b42a2c6e2856762b3c6f0d9bfd7483d3de225d2"} Jan 28 17:22:05 crc kubenswrapper[5001]: I0128 17:22:05.170551 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5" Jan 28 17:22:05 crc kubenswrapper[5001]: I0128 17:22:05.196394 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5" podStartSLOduration=3.196373877 podStartE2EDuration="3.196373877s" podCreationTimestamp="2026-01-28 17:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:22:05.192906818 +0000 UTC m=+371.360695048" watchObservedRunningTime="2026-01-28 17:22:05.196373877 +0000 UTC m=+371.364162127" Jan 28 17:22:05 crc kubenswrapper[5001]: I0128 17:22:05.521436 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bc9b688db-9c7p5" Jan 28 17:22:06 crc kubenswrapper[5001]: I0128 17:22:06.574106 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-97z2w"] Jan 28 17:22:06 crc kubenswrapper[5001]: I0128 17:22:06.574348 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-97z2w" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" containerName="registry-server" containerID="cri-o://f9a9097838e06b3cad8fd739c837933210ad56d1e7773d85b88f6f75e9bc11fa" gracePeriod=2 Jan 28 17:22:07 crc kubenswrapper[5001]: I0128 17:22:07.189557 5001 generic.go:334] "Generic (PLEG): container finished" podID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" containerID="f9a9097838e06b3cad8fd739c837933210ad56d1e7773d85b88f6f75e9bc11fa" exitCode=0 Jan 28 17:22:07 crc kubenswrapper[5001]: I0128 17:22:07.189727 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97z2w" event={"ID":"04b4625e-0b3f-44a9-b1a9-5855e74eef29","Type":"ContainerDied","Data":"f9a9097838e06b3cad8fd739c837933210ad56d1e7773d85b88f6f75e9bc11fa"} Jan 28 17:22:07 crc kubenswrapper[5001]: I0128 17:22:07.574618 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-97z2w" Jan 28 17:22:07 crc kubenswrapper[5001]: I0128 17:22:07.710312 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04b4625e-0b3f-44a9-b1a9-5855e74eef29-utilities\") pod \"04b4625e-0b3f-44a9-b1a9-5855e74eef29\" (UID: \"04b4625e-0b3f-44a9-b1a9-5855e74eef29\") " Jan 28 17:22:07 crc kubenswrapper[5001]: I0128 17:22:07.710445 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqqq2\" (UniqueName: \"kubernetes.io/projected/04b4625e-0b3f-44a9-b1a9-5855e74eef29-kube-api-access-hqqq2\") pod \"04b4625e-0b3f-44a9-b1a9-5855e74eef29\" (UID: \"04b4625e-0b3f-44a9-b1a9-5855e74eef29\") " Jan 28 17:22:07 crc kubenswrapper[5001]: I0128 17:22:07.710473 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04b4625e-0b3f-44a9-b1a9-5855e74eef29-catalog-content\") pod \"04b4625e-0b3f-44a9-b1a9-5855e74eef29\" (UID: \"04b4625e-0b3f-44a9-b1a9-5855e74eef29\") " Jan 28 17:22:07 crc kubenswrapper[5001]: I0128 17:22:07.712046 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04b4625e-0b3f-44a9-b1a9-5855e74eef29-utilities" (OuterVolumeSpecName: "utilities") pod "04b4625e-0b3f-44a9-b1a9-5855e74eef29" (UID: "04b4625e-0b3f-44a9-b1a9-5855e74eef29"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:22:07 crc kubenswrapper[5001]: I0128 17:22:07.724829 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04b4625e-0b3f-44a9-b1a9-5855e74eef29-kube-api-access-hqqq2" (OuterVolumeSpecName: "kube-api-access-hqqq2") pod "04b4625e-0b3f-44a9-b1a9-5855e74eef29" (UID: "04b4625e-0b3f-44a9-b1a9-5855e74eef29"). InnerVolumeSpecName "kube-api-access-hqqq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:22:07 crc kubenswrapper[5001]: I0128 17:22:07.812318 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04b4625e-0b3f-44a9-b1a9-5855e74eef29-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:07 crc kubenswrapper[5001]: I0128 17:22:07.812353 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqqq2\" (UniqueName: \"kubernetes.io/projected/04b4625e-0b3f-44a9-b1a9-5855e74eef29-kube-api-access-hqqq2\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:07 crc kubenswrapper[5001]: I0128 17:22:07.835721 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04b4625e-0b3f-44a9-b1a9-5855e74eef29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04b4625e-0b3f-44a9-b1a9-5855e74eef29" (UID: "04b4625e-0b3f-44a9-b1a9-5855e74eef29"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:22:07 crc kubenswrapper[5001]: I0128 17:22:07.914020 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04b4625e-0b3f-44a9-b1a9-5855e74eef29-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:08 crc kubenswrapper[5001]: I0128 17:22:08.197008 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97z2w" event={"ID":"04b4625e-0b3f-44a9-b1a9-5855e74eef29","Type":"ContainerDied","Data":"33f3ecbbfb28458768f8f566cba0e4dbc462246462441026e402e55ead16021f"} Jan 28 17:22:08 crc kubenswrapper[5001]: I0128 17:22:08.197062 5001 scope.go:117] "RemoveContainer" containerID="f9a9097838e06b3cad8fd739c837933210ad56d1e7773d85b88f6f75e9bc11fa" Jan 28 17:22:08 crc kubenswrapper[5001]: I0128 17:22:08.197201 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-97z2w" Jan 28 17:22:08 crc kubenswrapper[5001]: I0128 17:22:08.220374 5001 scope.go:117] "RemoveContainer" containerID="6502a47380c043640831573c7e7e2f48c56336e42021abe20f4b377e4fb9b5e2" Jan 28 17:22:08 crc kubenswrapper[5001]: I0128 17:22:08.246570 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-97z2w"] Jan 28 17:22:08 crc kubenswrapper[5001]: I0128 17:22:08.249735 5001 scope.go:117] "RemoveContainer" containerID="a6930246aa953025bf800fa9d6f4d92238388b004e8fb84d5dc49234b045cb05" Jan 28 17:22:08 crc kubenswrapper[5001]: I0128 17:22:08.254095 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-97z2w"] Jan 28 17:22:08 crc kubenswrapper[5001]: I0128 17:22:08.601290 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" path="/var/lib/kubelet/pods/04b4625e-0b3f-44a9-b1a9-5855e74eef29/volumes" Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.102538 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-r8qqd" Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.185199 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5ns7t"] Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.807381 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rnl7x"] Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.816746 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gsncd"] Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.817004 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gsncd" podUID="68df3eed-9a6f-4127-ac82-a61ae7216062" containerName="registry-server" containerID="cri-o://bd0bf2b2cfb3407f10dd3eeb25ae3eea7ca549b8a412f4548afeda5c3b4f41a5" gracePeriod=30 Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.824244 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2zkdv"] Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.824440 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" podUID="df3e2eda-99b8-401a-bfe3-4ebc0ba7628e" containerName="marketplace-operator" 
containerID="cri-o://08e75c7b1bbb1ecdc1089f8362762aae879be7cb50c3d44c4382c5ed52e72480" gracePeriod=30 Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.834841 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-92vjb"] Jan 28 17:22:21 crc kubenswrapper[5001]: E0128 17:22:21.835081 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" containerName="extract-utilities" Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.835094 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" containerName="extract-utilities" Jan 28 17:22:21 crc kubenswrapper[5001]: E0128 17:22:21.835109 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" containerName="extract-content" Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.835115 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" containerName="extract-content" Jan 28 17:22:21 crc kubenswrapper[5001]: E0128 17:22:21.835128 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" containerName="registry-server" Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.835134 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" containerName="registry-server" Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.835218 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b4625e-0b3f-44a9-b1a9-5855e74eef29" containerName="registry-server" Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.835644 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-92vjb" Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.846477 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bj9wh"] Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.846712 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bj9wh" podUID="24b568f4-71c2-4cae-932f-b6f1a2daf7a5" containerName="registry-server" containerID="cri-o://03775ff560b8149fc347f04a8699aa469dd86f375151b4d2aaa3c8241df6a115" gracePeriod=30 Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.851600 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2mp88"] Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.851848 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2mp88" podUID="c43a921e-0efa-4e2c-b425-21f7cd87a24b" containerName="registry-server" containerID="cri-o://069e42f02a3bfe167b6cc4e56309c801cfde3ce31e08e5e86f8750af464bc2ab" gracePeriod=30 Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.857927 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-92vjb"] Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.905184 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjgg2\" (UniqueName: \"kubernetes.io/projected/32ddd6d6-443e-4772-823f-9ff2580fa385-kube-api-access-wjgg2\") pod \"marketplace-operator-79b997595-92vjb\" (UID: \"32ddd6d6-443e-4772-823f-9ff2580fa385\") " pod="openshift-marketplace/marketplace-operator-79b997595-92vjb" Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.905538 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32ddd6d6-443e-4772-823f-9ff2580fa385-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-92vjb\" (UID: \"32ddd6d6-443e-4772-823f-9ff2580fa385\") " pod="openshift-marketplace/marketplace-operator-79b997595-92vjb" Jan 28 17:22:21 crc kubenswrapper[5001]: I0128 17:22:21.905711 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/32ddd6d6-443e-4772-823f-9ff2580fa385-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-92vjb\" (UID: \"32ddd6d6-443e-4772-823f-9ff2580fa385\") " pod="openshift-marketplace/marketplace-operator-79b997595-92vjb" Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.006606 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/32ddd6d6-443e-4772-823f-9ff2580fa385-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-92vjb\" (UID: \"32ddd6d6-443e-4772-823f-9ff2580fa385\") " pod="openshift-marketplace/marketplace-operator-79b997595-92vjb" Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.006718 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjgg2\" (UniqueName: \"kubernetes.io/projected/32ddd6d6-443e-4772-823f-9ff2580fa385-kube-api-access-wjgg2\") pod \"marketplace-operator-79b997595-92vjb\" (UID: \"32ddd6d6-443e-4772-823f-9ff2580fa385\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-92vjb" Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.006758 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32ddd6d6-443e-4772-823f-9ff2580fa385-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-92vjb\" (UID: \"32ddd6d6-443e-4772-823f-9ff2580fa385\") " pod="openshift-marketplace/marketplace-operator-79b997595-92vjb" Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.007898 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32ddd6d6-443e-4772-823f-9ff2580fa385-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-92vjb\" (UID: \"32ddd6d6-443e-4772-823f-9ff2580fa385\") " pod="openshift-marketplace/marketplace-operator-79b997595-92vjb" Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.015115 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/32ddd6d6-443e-4772-823f-9ff2580fa385-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-92vjb\" (UID: \"32ddd6d6-443e-4772-823f-9ff2580fa385\") " pod="openshift-marketplace/marketplace-operator-79b997595-92vjb" Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.025711 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjgg2\" (UniqueName: \"kubernetes.io/projected/32ddd6d6-443e-4772-823f-9ff2580fa385-kube-api-access-wjgg2\") pod \"marketplace-operator-79b997595-92vjb\" (UID: \"32ddd6d6-443e-4772-823f-9ff2580fa385\") " pod="openshift-marketplace/marketplace-operator-79b997595-92vjb" Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.152463 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-92vjb" Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.288782 5001 generic.go:334] "Generic (PLEG): container finished" podID="24b568f4-71c2-4cae-932f-b6f1a2daf7a5" containerID="03775ff560b8149fc347f04a8699aa469dd86f375151b4d2aaa3c8241df6a115" exitCode=0 Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.289212 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj9wh" event={"ID":"24b568f4-71c2-4cae-932f-b6f1a2daf7a5","Type":"ContainerDied","Data":"03775ff560b8149fc347f04a8699aa469dd86f375151b4d2aaa3c8241df6a115"} Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.293401 5001 generic.go:334] "Generic (PLEG): container finished" podID="68df3eed-9a6f-4127-ac82-a61ae7216062" containerID="bd0bf2b2cfb3407f10dd3eeb25ae3eea7ca549b8a412f4548afeda5c3b4f41a5" exitCode=0 Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.293463 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsncd" event={"ID":"68df3eed-9a6f-4127-ac82-a61ae7216062","Type":"ContainerDied","Data":"bd0bf2b2cfb3407f10dd3eeb25ae3eea7ca549b8a412f4548afeda5c3b4f41a5"} Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.295071 5001 generic.go:334] "Generic (PLEG): container finished" podID="df3e2eda-99b8-401a-bfe3-4ebc0ba7628e" containerID="08e75c7b1bbb1ecdc1089f8362762aae879be7cb50c3d44c4382c5ed52e72480" exitCode=0 Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.295110 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" event={"ID":"df3e2eda-99b8-401a-bfe3-4ebc0ba7628e","Type":"ContainerDied","Data":"08e75c7b1bbb1ecdc1089f8362762aae879be7cb50c3d44c4382c5ed52e72480"} Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.297856 5001 generic.go:334] "Generic (PLEG): container finished" podID="c43a921e-0efa-4e2c-b425-21f7cd87a24b" containerID="069e42f02a3bfe167b6cc4e56309c801cfde3ce31e08e5e86f8750af464bc2ab" exitCode=0 Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.298006 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mp88" event={"ID":"c43a921e-0efa-4e2c-b425-21f7cd87a24b","Type":"ContainerDied","Data":"069e42f02a3bfe167b6cc4e56309c801cfde3ce31e08e5e86f8750af464bc2ab"} Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.298108 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rnl7x" podUID="c648cc46-2f0e-4c7f-aaeb-a6abf4486e41" containerName="registry-server" containerID="cri-o://c7bad77984c4ebe1a4ae790931c775b0531530425e4f12edd70f4b70776923ef" gracePeriod=30 Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.590745 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-92vjb"] Jan 28 17:22:22 crc kubenswrapper[5001]: W0128 17:22:22.635779 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32ddd6d6_443e_4772_823f_9ff2580fa385.slice/crio-11f15a05a08e9d18fab95839d642dace09e7c89bb59e18620fae60836228015c WatchSource:0}: Error finding container 11f15a05a08e9d18fab95839d642dace09e7c89bb59e18620fae60836228015c: Status 404 returned error can't find the container with id 11f15a05a08e9d18fab95839d642dace09e7c89bb59e18620fae60836228015c Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 
17:22:22.716158 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59f675d9d7-zgm46"] Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.722654 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" podUID="3c09413b-2c61-4fe0-b39e-c23fcf5e5034" containerName="controller-manager" containerID="cri-o://e97253145ecd5902f3b76938ad4f6ee9578816667c637302113398c4b6b740e0" gracePeriod=30 Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.850711 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gsncd" Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.917251 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68df3eed-9a6f-4127-ac82-a61ae7216062-utilities\") pod \"68df3eed-9a6f-4127-ac82-a61ae7216062\" (UID: \"68df3eed-9a6f-4127-ac82-a61ae7216062\") " Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.917301 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7rx8\" (UniqueName: \"kubernetes.io/projected/68df3eed-9a6f-4127-ac82-a61ae7216062-kube-api-access-x7rx8\") pod \"68df3eed-9a6f-4127-ac82-a61ae7216062\" (UID: \"68df3eed-9a6f-4127-ac82-a61ae7216062\") " Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.917426 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68df3eed-9a6f-4127-ac82-a61ae7216062-catalog-content\") pod \"68df3eed-9a6f-4127-ac82-a61ae7216062\" (UID: \"68df3eed-9a6f-4127-ac82-a61ae7216062\") " Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.918323 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68df3eed-9a6f-4127-ac82-a61ae7216062-utilities" (OuterVolumeSpecName: "utilities") pod "68df3eed-9a6f-4127-ac82-a61ae7216062" (UID: "68df3eed-9a6f-4127-ac82-a61ae7216062"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.919071 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2mp88" Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.927138 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68df3eed-9a6f-4127-ac82-a61ae7216062-kube-api-access-x7rx8" (OuterVolumeSpecName: "kube-api-access-x7rx8") pod "68df3eed-9a6f-4127-ac82-a61ae7216062" (UID: "68df3eed-9a6f-4127-ac82-a61ae7216062"). InnerVolumeSpecName "kube-api-access-x7rx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:22:22 crc kubenswrapper[5001]: I0128 17:22:22.988474 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68df3eed-9a6f-4127-ac82-a61ae7216062-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68df3eed-9a6f-4127-ac82-a61ae7216062" (UID: "68df3eed-9a6f-4127-ac82-a61ae7216062"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.001646 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bj9wh" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.006194 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.018643 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c43a921e-0efa-4e2c-b425-21f7cd87a24b-catalog-content\") pod \"c43a921e-0efa-4e2c-b425-21f7cd87a24b\" (UID: \"c43a921e-0efa-4e2c-b425-21f7cd87a24b\") " Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.018721 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j942c\" (UniqueName: \"kubernetes.io/projected/c43a921e-0efa-4e2c-b425-21f7cd87a24b-kube-api-access-j942c\") pod \"c43a921e-0efa-4e2c-b425-21f7cd87a24b\" (UID: \"c43a921e-0efa-4e2c-b425-21f7cd87a24b\") " Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.018751 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c43a921e-0efa-4e2c-b425-21f7cd87a24b-utilities\") pod \"c43a921e-0efa-4e2c-b425-21f7cd87a24b\" (UID: \"c43a921e-0efa-4e2c-b425-21f7cd87a24b\") " Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.018968 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68df3eed-9a6f-4127-ac82-a61ae7216062-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.019058 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68df3eed-9a6f-4127-ac82-a61ae7216062-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.019071 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7rx8\" (UniqueName: \"kubernetes.io/projected/68df3eed-9a6f-4127-ac82-a61ae7216062-kube-api-access-x7rx8\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.019726 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c43a921e-0efa-4e2c-b425-21f7cd87a24b-utilities" (OuterVolumeSpecName: "utilities") pod "c43a921e-0efa-4e2c-b425-21f7cd87a24b" (UID: "c43a921e-0efa-4e2c-b425-21f7cd87a24b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.030946 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c43a921e-0efa-4e2c-b425-21f7cd87a24b-kube-api-access-j942c" (OuterVolumeSpecName: "kube-api-access-j942c") pod "c43a921e-0efa-4e2c-b425-21f7cd87a24b" (UID: "c43a921e-0efa-4e2c-b425-21f7cd87a24b"). InnerVolumeSpecName "kube-api-access-j942c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.120037 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df3e2eda-99b8-401a-bfe3-4ebc0ba7628e-marketplace-trusted-ca\") pod \"df3e2eda-99b8-401a-bfe3-4ebc0ba7628e\" (UID: \"df3e2eda-99b8-401a-bfe3-4ebc0ba7628e\") " Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.120106 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdqvh\" (UniqueName: \"kubernetes.io/projected/24b568f4-71c2-4cae-932f-b6f1a2daf7a5-kube-api-access-kdqvh\") pod \"24b568f4-71c2-4cae-932f-b6f1a2daf7a5\" (UID: \"24b568f4-71c2-4cae-932f-b6f1a2daf7a5\") " Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.120132 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncwkv\" (UniqueName: \"kubernetes.io/projected/df3e2eda-99b8-401a-bfe3-4ebc0ba7628e-kube-api-access-ncwkv\") pod \"df3e2eda-99b8-401a-bfe3-4ebc0ba7628e\" (UID: \"df3e2eda-99b8-401a-bfe3-4ebc0ba7628e\") " Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.120175 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b568f4-71c2-4cae-932f-b6f1a2daf7a5-catalog-content\") pod \"24b568f4-71c2-4cae-932f-b6f1a2daf7a5\" (UID: \"24b568f4-71c2-4cae-932f-b6f1a2daf7a5\") " Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.120203 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/df3e2eda-99b8-401a-bfe3-4ebc0ba7628e-marketplace-operator-metrics\") pod \"df3e2eda-99b8-401a-bfe3-4ebc0ba7628e\" (UID: \"df3e2eda-99b8-401a-bfe3-4ebc0ba7628e\") " Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.120222 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24b568f4-71c2-4cae-932f-b6f1a2daf7a5-utilities\") pod \"24b568f4-71c2-4cae-932f-b6f1a2daf7a5\" (UID: \"24b568f4-71c2-4cae-932f-b6f1a2daf7a5\") " Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.120435 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j942c\" (UniqueName: \"kubernetes.io/projected/c43a921e-0efa-4e2c-b425-21f7cd87a24b-kube-api-access-j942c\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.120446 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c43a921e-0efa-4e2c-b425-21f7cd87a24b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.121210 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24b568f4-71c2-4cae-932f-b6f1a2daf7a5-utilities" (OuterVolumeSpecName: "utilities") pod "24b568f4-71c2-4cae-932f-b6f1a2daf7a5" (UID: "24b568f4-71c2-4cae-932f-b6f1a2daf7a5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.121629 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df3e2eda-99b8-401a-bfe3-4ebc0ba7628e-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "df3e2eda-99b8-401a-bfe3-4ebc0ba7628e" (UID: "df3e2eda-99b8-401a-bfe3-4ebc0ba7628e"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.146500 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24b568f4-71c2-4cae-932f-b6f1a2daf7a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24b568f4-71c2-4cae-932f-b6f1a2daf7a5" (UID: "24b568f4-71c2-4cae-932f-b6f1a2daf7a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.149170 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24b568f4-71c2-4cae-932f-b6f1a2daf7a5-kube-api-access-kdqvh" (OuterVolumeSpecName: "kube-api-access-kdqvh") pod "24b568f4-71c2-4cae-932f-b6f1a2daf7a5" (UID: "24b568f4-71c2-4cae-932f-b6f1a2daf7a5"). InnerVolumeSpecName "kube-api-access-kdqvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.149326 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df3e2eda-99b8-401a-bfe3-4ebc0ba7628e-kube-api-access-ncwkv" (OuterVolumeSpecName: "kube-api-access-ncwkv") pod "df3e2eda-99b8-401a-bfe3-4ebc0ba7628e" (UID: "df3e2eda-99b8-401a-bfe3-4ebc0ba7628e"). InnerVolumeSpecName "kube-api-access-ncwkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.149627 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df3e2eda-99b8-401a-bfe3-4ebc0ba7628e-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "df3e2eda-99b8-401a-bfe3-4ebc0ba7628e" (UID: "df3e2eda-99b8-401a-bfe3-4ebc0ba7628e"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.170516 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c43a921e-0efa-4e2c-b425-21f7cd87a24b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c43a921e-0efa-4e2c-b425-21f7cd87a24b" (UID: "c43a921e-0efa-4e2c-b425-21f7cd87a24b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.222146 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b568f4-71c2-4cae-932f-b6f1a2daf7a5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.222191 5001 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/df3e2eda-99b8-401a-bfe3-4ebc0ba7628e-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.222205 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24b568f4-71c2-4cae-932f-b6f1a2daf7a5-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.222213 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c43a921e-0efa-4e2c-b425-21f7cd87a24b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.222222 5001 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df3e2eda-99b8-401a-bfe3-4ebc0ba7628e-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.222230 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdqvh\" (UniqueName: \"kubernetes.io/projected/24b568f4-71c2-4cae-932f-b6f1a2daf7a5-kube-api-access-kdqvh\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.222252 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncwkv\" (UniqueName: \"kubernetes.io/projected/df3e2eda-99b8-401a-bfe3-4ebc0ba7628e-kube-api-access-ncwkv\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.305276 5001 generic.go:334] "Generic (PLEG): container finished" podID="c648cc46-2f0e-4c7f-aaeb-a6abf4486e41" containerID="c7bad77984c4ebe1a4ae790931c775b0531530425e4f12edd70f4b70776923ef" exitCode=0 Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.306079 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnl7x" event={"ID":"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41","Type":"ContainerDied","Data":"c7bad77984c4ebe1a4ae790931c775b0531530425e4f12edd70f4b70776923ef"} Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.309672 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bj9wh" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.309677 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj9wh" event={"ID":"24b568f4-71c2-4cae-932f-b6f1a2daf7a5","Type":"ContainerDied","Data":"755ee0e3b6b2c2dae42728f12e425fe78519d06cf17071c80f41fc6a06c093aa"} Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.309808 5001 scope.go:117] "RemoveContainer" containerID="03775ff560b8149fc347f04a8699aa469dd86f375151b4d2aaa3c8241df6a115" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.311208 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-92vjb" event={"ID":"32ddd6d6-443e-4772-823f-9ff2580fa385","Type":"ContainerStarted","Data":"01173be5f0fb0f1e172da292d0b5b2717e794d62092302b38d9303c9d929ccaf"} Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.311252 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-92vjb" event={"ID":"32ddd6d6-443e-4772-823f-9ff2580fa385","Type":"ContainerStarted","Data":"11f15a05a08e9d18fab95839d642dace09e7c89bb59e18620fae60836228015c"} Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.313186 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsncd" event={"ID":"68df3eed-9a6f-4127-ac82-a61ae7216062","Type":"ContainerDied","Data":"2eb82eec7d204d66c90ebf9db71d9001359711eb5886882fc334dd4b240ae7dc"} Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.313256 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gsncd" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.314576 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" event={"ID":"df3e2eda-99b8-401a-bfe3-4ebc0ba7628e","Type":"ContainerDied","Data":"3e8f0e0082effeee5b0140c5f9c0998c57a35e05c20cb18bb89f1b48dee986ba"} Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.314649 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2zkdv" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.320304 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mp88" event={"ID":"c43a921e-0efa-4e2c-b425-21f7cd87a24b","Type":"ContainerDied","Data":"b8bc4cf6157f57bc472a9a40eb83c0dd0eddd8637324ee1b377d4daadb163ed2"} Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.320432 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2mp88" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.326754 5001 generic.go:334] "Generic (PLEG): container finished" podID="3c09413b-2c61-4fe0-b39e-c23fcf5e5034" containerID="e97253145ecd5902f3b76938ad4f6ee9578816667c637302113398c4b6b740e0" exitCode=0 Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.326803 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" event={"ID":"3c09413b-2c61-4fe0-b39e-c23fcf5e5034","Type":"ContainerDied","Data":"e97253145ecd5902f3b76938ad4f6ee9578816667c637302113398c4b6b740e0"} Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.328111 5001 scope.go:117] "RemoveContainer" containerID="ec744f00094c6841ef4c040a09830025f9dfadc7ca0d7460b77c874b10cea178" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.352657 5001 scope.go:117] "RemoveContainer" containerID="a7cdfb80bf9d2bda1ce357068bcb82c6faa3fe3b6450361998102f28ab217842" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.357112 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bj9wh"] Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.361407 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bj9wh"] Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.365497 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rnl7x" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.377789 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gsncd"] Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.388347 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gsncd"] Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.396600 5001 scope.go:117] "RemoveContainer" containerID="bd0bf2b2cfb3407f10dd3eeb25ae3eea7ca549b8a412f4548afeda5c3b4f41a5" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.397824 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2zkdv"] Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.404565 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2zkdv"] Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.421688 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2mp88"] Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.424198 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzzb8\" (UniqueName: \"kubernetes.io/projected/c648cc46-2f0e-4c7f-aaeb-a6abf4486e41-kube-api-access-kzzb8\") pod \"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41\" (UID: \"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41\") " Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.424249 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c648cc46-2f0e-4c7f-aaeb-a6abf4486e41-utilities\") pod \"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41\" (UID: \"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41\") " Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.424285 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c648cc46-2f0e-4c7f-aaeb-a6abf4486e41-catalog-content\") pod \"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41\" (UID: \"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41\") " Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.425864 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2mp88"] Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.426630 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c648cc46-2f0e-4c7f-aaeb-a6abf4486e41-utilities" (OuterVolumeSpecName: "utilities") pod "c648cc46-2f0e-4c7f-aaeb-a6abf4486e41" (UID: "c648cc46-2f0e-4c7f-aaeb-a6abf4486e41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.430171 5001 scope.go:117] "RemoveContainer" containerID="6680bed6587e40285cf0aee06ccef3ec65375c512a9d290add67d892cabf9695" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.430172 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c648cc46-2f0e-4c7f-aaeb-a6abf4486e41-kube-api-access-kzzb8" (OuterVolumeSpecName: "kube-api-access-kzzb8") pod "c648cc46-2f0e-4c7f-aaeb-a6abf4486e41" (UID: "c648cc46-2f0e-4c7f-aaeb-a6abf4486e41"). InnerVolumeSpecName "kube-api-access-kzzb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.465151 5001 scope.go:117] "RemoveContainer" containerID="79491c31565cee7aa3620bc2650420ca7b5676eca5ad7e8ed4865a9d714c74c1" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.489560 5001 scope.go:117] "RemoveContainer" containerID="08e75c7b1bbb1ecdc1089f8362762aae879be7cb50c3d44c4382c5ed52e72480" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.499148 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c648cc46-2f0e-4c7f-aaeb-a6abf4486e41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c648cc46-2f0e-4c7f-aaeb-a6abf4486e41" (UID: "c648cc46-2f0e-4c7f-aaeb-a6abf4486e41"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.513426 5001 scope.go:117] "RemoveContainer" containerID="069e42f02a3bfe167b6cc4e56309c801cfde3ce31e08e5e86f8750af464bc2ab" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.525014 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c648cc46-2f0e-4c7f-aaeb-a6abf4486e41-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.525055 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzzb8\" (UniqueName: \"kubernetes.io/projected/c648cc46-2f0e-4c7f-aaeb-a6abf4486e41-kube-api-access-kzzb8\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.525069 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c648cc46-2f0e-4c7f-aaeb-a6abf4486e41-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.535278 5001 scope.go:117] "RemoveContainer" containerID="d42bedf41a5192321adc1a066c806a98a9325babe2fd11f3a168bfe772eb8da7" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.550171 5001 scope.go:117] "RemoveContainer" containerID="745ce5be4672d3adbde0786eab39d20b627ee4a25edf918853d83f74304b2360" Jan 28 17:22:23 crc kubenswrapper[5001]: I0128 17:22:23.988554 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.038640 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ndsd8"] Jan 28 17:22:24 crc kubenswrapper[5001]: E0128 17:22:24.038911 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c43a921e-0efa-4e2c-b425-21f7cd87a24b" containerName="registry-server" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.038927 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="c43a921e-0efa-4e2c-b425-21f7cd87a24b" containerName="registry-server" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.038925 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-config\") pod \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\" (UID: \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.039022 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-proxy-ca-bundles\") pod \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\" (UID: \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.039055 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4mt9\" (UniqueName: \"kubernetes.io/projected/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-kube-api-access-v4mt9\") pod \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\" (UID: \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.039072 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-serving-cert\") pod \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\" (UID: 
\"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.039141 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-client-ca\") pod \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\" (UID: \"3c09413b-2c61-4fe0-b39e-c23fcf5e5034\") " Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.039918 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-client-ca" (OuterVolumeSpecName: "client-ca") pod "3c09413b-2c61-4fe0-b39e-c23fcf5e5034" (UID: "3c09413b-2c61-4fe0-b39e-c23fcf5e5034"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:22:24 crc kubenswrapper[5001]: E0128 17:22:24.038937 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c648cc46-2f0e-4c7f-aaeb-a6abf4486e41" containerName="registry-server" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.039992 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="c648cc46-2f0e-4c7f-aaeb-a6abf4486e41" containerName="registry-server" Jan 28 17:22:24 crc kubenswrapper[5001]: E0128 17:22:24.040007 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c09413b-2c61-4fe0-b39e-c23fcf5e5034" containerName="controller-manager" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.040015 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c09413b-2c61-4fe0-b39e-c23fcf5e5034" containerName="controller-manager" Jan 28 17:22:24 crc kubenswrapper[5001]: E0128 17:22:24.040025 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df3e2eda-99b8-401a-bfe3-4ebc0ba7628e" containerName="marketplace-operator" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.040031 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="df3e2eda-99b8-401a-bfe3-4ebc0ba7628e" containerName="marketplace-operator" Jan 28 17:22:24 crc kubenswrapper[5001]: E0128 17:22:24.040039 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b568f4-71c2-4cae-932f-b6f1a2daf7a5" containerName="extract-content" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.040046 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b568f4-71c2-4cae-932f-b6f1a2daf7a5" containerName="extract-content" Jan 28 17:22:24 crc kubenswrapper[5001]: E0128 17:22:24.040054 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68df3eed-9a6f-4127-ac82-a61ae7216062" containerName="extract-content" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.040060 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="68df3eed-9a6f-4127-ac82-a61ae7216062" containerName="extract-content" Jan 28 17:22:24 crc kubenswrapper[5001]: E0128 17:22:24.040068 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c43a921e-0efa-4e2c-b425-21f7cd87a24b" containerName="extract-utilities" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.040076 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="c43a921e-0efa-4e2c-b425-21f7cd87a24b" containerName="extract-utilities" Jan 28 17:22:24 crc kubenswrapper[5001]: E0128 17:22:24.040086 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68df3eed-9a6f-4127-ac82-a61ae7216062" containerName="extract-utilities" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.040092 5001 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="68df3eed-9a6f-4127-ac82-a61ae7216062" containerName="extract-utilities" Jan 28 17:22:24 crc kubenswrapper[5001]: E0128 17:22:24.040104 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c43a921e-0efa-4e2c-b425-21f7cd87a24b" containerName="extract-content" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.040110 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="c43a921e-0efa-4e2c-b425-21f7cd87a24b" containerName="extract-content" Jan 28 17:22:24 crc kubenswrapper[5001]: E0128 17:22:24.040123 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68df3eed-9a6f-4127-ac82-a61ae7216062" containerName="registry-server" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.040129 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="68df3eed-9a6f-4127-ac82-a61ae7216062" containerName="registry-server" Jan 28 17:22:24 crc kubenswrapper[5001]: E0128 17:22:24.040135 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b568f4-71c2-4cae-932f-b6f1a2daf7a5" containerName="extract-utilities" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.040141 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b568f4-71c2-4cae-932f-b6f1a2daf7a5" containerName="extract-utilities" Jan 28 17:22:24 crc kubenswrapper[5001]: E0128 17:22:24.040150 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c648cc46-2f0e-4c7f-aaeb-a6abf4486e41" containerName="extract-content" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.040155 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="c648cc46-2f0e-4c7f-aaeb-a6abf4486e41" containerName="extract-content" Jan 28 17:22:24 crc kubenswrapper[5001]: E0128 17:22:24.040165 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b568f4-71c2-4cae-932f-b6f1a2daf7a5" containerName="registry-server" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.040170 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b568f4-71c2-4cae-932f-b6f1a2daf7a5" containerName="registry-server" Jan 28 17:22:24 crc kubenswrapper[5001]: E0128 17:22:24.040181 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c648cc46-2f0e-4c7f-aaeb-a6abf4486e41" containerName="extract-utilities" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.040186 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="c648cc46-2f0e-4c7f-aaeb-a6abf4486e41" containerName="extract-utilities" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.040329 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="68df3eed-9a6f-4127-ac82-a61ae7216062" containerName="registry-server" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.040338 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="df3e2eda-99b8-401a-bfe3-4ebc0ba7628e" containerName="marketplace-operator" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.040348 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b568f4-71c2-4cae-932f-b6f1a2daf7a5" containerName="registry-server" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.040356 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c09413b-2c61-4fe0-b39e-c23fcf5e5034" containerName="controller-manager" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.040364 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="c43a921e-0efa-4e2c-b425-21f7cd87a24b" containerName="registry-server" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 
17:22:24.040371 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="c648cc46-2f0e-4c7f-aaeb-a6abf4486e41" containerName="registry-server" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.041150 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ndsd8" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.039996 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-config" (OuterVolumeSpecName: "config") pod "3c09413b-2c61-4fe0-b39e-c23fcf5e5034" (UID: "3c09413b-2c61-4fe0-b39e-c23fcf5e5034"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.044517 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-kube-api-access-v4mt9" (OuterVolumeSpecName: "kube-api-access-v4mt9") pod "3c09413b-2c61-4fe0-b39e-c23fcf5e5034" (UID: "3c09413b-2c61-4fe0-b39e-c23fcf5e5034"). InnerVolumeSpecName "kube-api-access-v4mt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.044907 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3c09413b-2c61-4fe0-b39e-c23fcf5e5034" (UID: "3c09413b-2c61-4fe0-b39e-c23fcf5e5034"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.046344 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3c09413b-2c61-4fe0-b39e-c23fcf5e5034" (UID: "3c09413b-2c61-4fe0-b39e-c23fcf5e5034"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.049144 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.053913 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ndsd8"] Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.141042 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp4hb\" (UniqueName: \"kubernetes.io/projected/01a6f242-b936-4752-b868-ebffda3b8657-kube-api-access-mp4hb\") pod \"community-operators-ndsd8\" (UID: \"01a6f242-b936-4752-b868-ebffda3b8657\") " pod="openshift-marketplace/community-operators-ndsd8" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.141153 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01a6f242-b936-4752-b868-ebffda3b8657-utilities\") pod \"community-operators-ndsd8\" (UID: \"01a6f242-b936-4752-b868-ebffda3b8657\") " pod="openshift-marketplace/community-operators-ndsd8" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.141362 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01a6f242-b936-4752-b868-ebffda3b8657-catalog-content\") pod \"community-operators-ndsd8\" (UID: \"01a6f242-b936-4752-b868-ebffda3b8657\") " pod="openshift-marketplace/community-operators-ndsd8" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.141488 5001 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.141503 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4mt9\" (UniqueName: \"kubernetes.io/projected/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-kube-api-access-v4mt9\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.141514 5001 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.141523 5001 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.141532 5001 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c09413b-2c61-4fe0-b39e-c23fcf5e5034-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.218709 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4q4m6"] Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.220155 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4q4m6" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.224049 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.233373 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4q4m6"] Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.242890 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/560f1835-9368-4231-9d85-b0cbcad12b8c-catalog-content\") pod \"redhat-marketplace-4q4m6\" (UID: \"560f1835-9368-4231-9d85-b0cbcad12b8c\") " pod="openshift-marketplace/redhat-marketplace-4q4m6" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.242936 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01a6f242-b936-4752-b868-ebffda3b8657-utilities\") pod \"community-operators-ndsd8\" (UID: \"01a6f242-b936-4752-b868-ebffda3b8657\") " pod="openshift-marketplace/community-operators-ndsd8" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.243250 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01a6f242-b936-4752-b868-ebffda3b8657-catalog-content\") pod \"community-operators-ndsd8\" (UID: \"01a6f242-b936-4752-b868-ebffda3b8657\") " pod="openshift-marketplace/community-operators-ndsd8" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.243291 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp4hb\" (UniqueName: \"kubernetes.io/projected/01a6f242-b936-4752-b868-ebffda3b8657-kube-api-access-mp4hb\") pod \"community-operators-ndsd8\" (UID: \"01a6f242-b936-4752-b868-ebffda3b8657\") " pod="openshift-marketplace/community-operators-ndsd8" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.243316 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmtqx\" (UniqueName: \"kubernetes.io/projected/560f1835-9368-4231-9d85-b0cbcad12b8c-kube-api-access-vmtqx\") pod \"redhat-marketplace-4q4m6\" (UID: \"560f1835-9368-4231-9d85-b0cbcad12b8c\") " pod="openshift-marketplace/redhat-marketplace-4q4m6" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.243335 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/560f1835-9368-4231-9d85-b0cbcad12b8c-utilities\") pod \"redhat-marketplace-4q4m6\" (UID: \"560f1835-9368-4231-9d85-b0cbcad12b8c\") " pod="openshift-marketplace/redhat-marketplace-4q4m6" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.243686 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01a6f242-b936-4752-b868-ebffda3b8657-utilities\") pod \"community-operators-ndsd8\" (UID: \"01a6f242-b936-4752-b868-ebffda3b8657\") " pod="openshift-marketplace/community-operators-ndsd8" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.244135 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01a6f242-b936-4752-b868-ebffda3b8657-catalog-content\") pod \"community-operators-ndsd8\" (UID: 
\"01a6f242-b936-4752-b868-ebffda3b8657\") " pod="openshift-marketplace/community-operators-ndsd8" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.262821 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp4hb\" (UniqueName: \"kubernetes.io/projected/01a6f242-b936-4752-b868-ebffda3b8657-kube-api-access-mp4hb\") pod \"community-operators-ndsd8\" (UID: \"01a6f242-b936-4752-b868-ebffda3b8657\") " pod="openshift-marketplace/community-operators-ndsd8" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.334274 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnl7x" event={"ID":"c648cc46-2f0e-4c7f-aaeb-a6abf4486e41","Type":"ContainerDied","Data":"56c2a4fe4a2429a55142dae7fe5800d7f8df1058a0c85da418001865f9c98127"} Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.334325 5001 scope.go:117] "RemoveContainer" containerID="c7bad77984c4ebe1a4ae790931c775b0531530425e4f12edd70f4b70776923ef" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.334294 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rnl7x" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.340246 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.340848 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59f675d9d7-zgm46" event={"ID":"3c09413b-2c61-4fe0-b39e-c23fcf5e5034","Type":"ContainerDied","Data":"1d995bd133dc31d25e4c61463f9f765542d9291155d9e736a2e9916bec72f0e8"} Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.342022 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-92vjb" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.344243 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmtqx\" (UniqueName: \"kubernetes.io/projected/560f1835-9368-4231-9d85-b0cbcad12b8c-kube-api-access-vmtqx\") pod \"redhat-marketplace-4q4m6\" (UID: \"560f1835-9368-4231-9d85-b0cbcad12b8c\") " pod="openshift-marketplace/redhat-marketplace-4q4m6" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.344295 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/560f1835-9368-4231-9d85-b0cbcad12b8c-utilities\") pod \"redhat-marketplace-4q4m6\" (UID: \"560f1835-9368-4231-9d85-b0cbcad12b8c\") " pod="openshift-marketplace/redhat-marketplace-4q4m6" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.344331 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/560f1835-9368-4231-9d85-b0cbcad12b8c-catalog-content\") pod \"redhat-marketplace-4q4m6\" (UID: \"560f1835-9368-4231-9d85-b0cbcad12b8c\") " pod="openshift-marketplace/redhat-marketplace-4q4m6" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.344731 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/560f1835-9368-4231-9d85-b0cbcad12b8c-utilities\") pod \"redhat-marketplace-4q4m6\" (UID: \"560f1835-9368-4231-9d85-b0cbcad12b8c\") " pod="openshift-marketplace/redhat-marketplace-4q4m6" Jan 28 17:22:24 crc kubenswrapper[5001]: 
I0128 17:22:24.344835 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/560f1835-9368-4231-9d85-b0cbcad12b8c-catalog-content\") pod \"redhat-marketplace-4q4m6\" (UID: \"560f1835-9368-4231-9d85-b0cbcad12b8c\") " pod="openshift-marketplace/redhat-marketplace-4q4m6" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.346161 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-92vjb" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.354504 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-92vjb" podStartSLOduration=3.354487703 podStartE2EDuration="3.354487703s" podCreationTimestamp="2026-01-28 17:22:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:22:24.353465973 +0000 UTC m=+390.521254223" watchObservedRunningTime="2026-01-28 17:22:24.354487703 +0000 UTC m=+390.522275933" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.358745 5001 scope.go:117] "RemoveContainer" containerID="8ed2adce3bf674680295c7beb112e3e399673faf8cc96af7eefe9f0ecb8b5bb4" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.364775 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmtqx\" (UniqueName: \"kubernetes.io/projected/560f1835-9368-4231-9d85-b0cbcad12b8c-kube-api-access-vmtqx\") pod \"redhat-marketplace-4q4m6\" (UID: \"560f1835-9368-4231-9d85-b0cbcad12b8c\") " pod="openshift-marketplace/redhat-marketplace-4q4m6" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.372144 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rnl7x"] Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.373268 5001 scope.go:117] "RemoveContainer" containerID="95b4e378147c98ebd50f79875305823eef4c18e1936d595ae1cced61e727cf69" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.377109 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rnl7x"] Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.385128 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ndsd8" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.394523 5001 scope.go:117] "RemoveContainer" containerID="e97253145ecd5902f3b76938ad4f6ee9578816667c637302113398c4b6b740e0" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.406602 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59f675d9d7-zgm46"] Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.410874 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-59f675d9d7-zgm46"] Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.543295 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4q4m6" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.600766 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b568f4-71c2-4cae-932f-b6f1a2daf7a5" path="/var/lib/kubelet/pods/24b568f4-71c2-4cae-932f-b6f1a2daf7a5/volumes" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.601562 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c09413b-2c61-4fe0-b39e-c23fcf5e5034" path="/var/lib/kubelet/pods/3c09413b-2c61-4fe0-b39e-c23fcf5e5034/volumes" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.602126 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68df3eed-9a6f-4127-ac82-a61ae7216062" path="/var/lib/kubelet/pods/68df3eed-9a6f-4127-ac82-a61ae7216062/volumes" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.604496 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c43a921e-0efa-4e2c-b425-21f7cd87a24b" path="/var/lib/kubelet/pods/c43a921e-0efa-4e2c-b425-21f7cd87a24b/volumes" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.606715 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c648cc46-2f0e-4c7f-aaeb-a6abf4486e41" path="/var/lib/kubelet/pods/c648cc46-2f0e-4c7f-aaeb-a6abf4486e41/volumes" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.610317 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df3e2eda-99b8-401a-bfe3-4ebc0ba7628e" path="/var/lib/kubelet/pods/df3e2eda-99b8-401a-bfe3-4ebc0ba7628e/volumes" Jan 28 17:22:24 crc kubenswrapper[5001]: I0128 17:22:24.635638 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ndsd8"] Jan 28 17:22:24 crc kubenswrapper[5001]: W0128 17:22:24.643821 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01a6f242_b936_4752_b868_ebffda3b8657.slice/crio-5cdefeae01218ad0229f1a5ee32a5f22cdc39f088b4d53379802685487a10cf7 WatchSource:0}: Error finding container 5cdefeae01218ad0229f1a5ee32a5f22cdc39f088b4d53379802685487a10cf7: Status 404 returned error can't find the container with id 5cdefeae01218ad0229f1a5ee32a5f22cdc39f088b4d53379802685487a10cf7 Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.345602 5001 generic.go:334] "Generic (PLEG): container finished" podID="01a6f242-b936-4752-b868-ebffda3b8657" containerID="6cac40fe6d97c6a5b0f0e430996cf2668cb3aa2a08ab68e2e124442088434952" exitCode=0 Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.345671 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndsd8" event={"ID":"01a6f242-b936-4752-b868-ebffda3b8657","Type":"ContainerDied","Data":"6cac40fe6d97c6a5b0f0e430996cf2668cb3aa2a08ab68e2e124442088434952"} Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.345890 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndsd8" event={"ID":"01a6f242-b936-4752-b868-ebffda3b8657","Type":"ContainerStarted","Data":"5cdefeae01218ad0229f1a5ee32a5f22cdc39f088b4d53379802685487a10cf7"} Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.795384 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55d4c5b566-z4mpl"] Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.796236 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.801639 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55d4c5b566-z4mpl"] Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.802944 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.803136 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.803234 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.803334 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.803474 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.803577 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.806919 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.875819 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dfc0f2b-ae51-46e9-b05a-91931d79d9b2-config\") pod \"controller-manager-55d4c5b566-z4mpl\" (UID: \"5dfc0f2b-ae51-46e9-b05a-91931d79d9b2\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.875876 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5dfc0f2b-ae51-46e9-b05a-91931d79d9b2-serving-cert\") pod \"controller-manager-55d4c5b566-z4mpl\" (UID: \"5dfc0f2b-ae51-46e9-b05a-91931d79d9b2\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.875922 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5dfc0f2b-ae51-46e9-b05a-91931d79d9b2-proxy-ca-bundles\") pod \"controller-manager-55d4c5b566-z4mpl\" (UID: \"5dfc0f2b-ae51-46e9-b05a-91931d79d9b2\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.875936 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc0f2b-ae51-46e9-b05a-91931d79d9b2-client-ca\") pod \"controller-manager-55d4c5b566-z4mpl\" (UID: \"5dfc0f2b-ae51-46e9-b05a-91931d79d9b2\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.875954 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ljkd\" (UniqueName: 
\"kubernetes.io/projected/5dfc0f2b-ae51-46e9-b05a-91931d79d9b2-kube-api-access-6ljkd\") pod \"controller-manager-55d4c5b566-z4mpl\" (UID: \"5dfc0f2b-ae51-46e9-b05a-91931d79d9b2\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.977331 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dfc0f2b-ae51-46e9-b05a-91931d79d9b2-config\") pod \"controller-manager-55d4c5b566-z4mpl\" (UID: \"5dfc0f2b-ae51-46e9-b05a-91931d79d9b2\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.977425 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5dfc0f2b-ae51-46e9-b05a-91931d79d9b2-serving-cert\") pod \"controller-manager-55d4c5b566-z4mpl\" (UID: \"5dfc0f2b-ae51-46e9-b05a-91931d79d9b2\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.977498 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5dfc0f2b-ae51-46e9-b05a-91931d79d9b2-proxy-ca-bundles\") pod \"controller-manager-55d4c5b566-z4mpl\" (UID: \"5dfc0f2b-ae51-46e9-b05a-91931d79d9b2\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.977535 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc0f2b-ae51-46e9-b05a-91931d79d9b2-client-ca\") pod \"controller-manager-55d4c5b566-z4mpl\" (UID: \"5dfc0f2b-ae51-46e9-b05a-91931d79d9b2\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.977568 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ljkd\" (UniqueName: \"kubernetes.io/projected/5dfc0f2b-ae51-46e9-b05a-91931d79d9b2-kube-api-access-6ljkd\") pod \"controller-manager-55d4c5b566-z4mpl\" (UID: \"5dfc0f2b-ae51-46e9-b05a-91931d79d9b2\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.978777 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc0f2b-ae51-46e9-b05a-91931d79d9b2-client-ca\") pod \"controller-manager-55d4c5b566-z4mpl\" (UID: \"5dfc0f2b-ae51-46e9-b05a-91931d79d9b2\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.979032 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5dfc0f2b-ae51-46e9-b05a-91931d79d9b2-proxy-ca-bundles\") pod \"controller-manager-55d4c5b566-z4mpl\" (UID: \"5dfc0f2b-ae51-46e9-b05a-91931d79d9b2\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.979068 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dfc0f2b-ae51-46e9-b05a-91931d79d9b2-config\") pod \"controller-manager-55d4c5b566-z4mpl\" (UID: \"5dfc0f2b-ae51-46e9-b05a-91931d79d9b2\") " 
pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.984375 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5dfc0f2b-ae51-46e9-b05a-91931d79d9b2-serving-cert\") pod \"controller-manager-55d4c5b566-z4mpl\" (UID: \"5dfc0f2b-ae51-46e9-b05a-91931d79d9b2\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:25.996418 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ljkd\" (UniqueName: \"kubernetes.io/projected/5dfc0f2b-ae51-46e9-b05a-91931d79d9b2-kube-api-access-6ljkd\") pod \"controller-manager-55d4c5b566-z4mpl\" (UID: \"5dfc0f2b-ae51-46e9-b05a-91931d79d9b2\") " pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.148411 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.429288 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9tlg9"] Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.430361 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9tlg9" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.432782 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.446095 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9tlg9"] Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.483724 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aedcee55-c6fd-4322-b635-e0fca159ef41-catalog-content\") pod \"redhat-operators-9tlg9\" (UID: \"aedcee55-c6fd-4322-b635-e0fca159ef41\") " pod="openshift-marketplace/redhat-operators-9tlg9" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.483834 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aedcee55-c6fd-4322-b635-e0fca159ef41-utilities\") pod \"redhat-operators-9tlg9\" (UID: \"aedcee55-c6fd-4322-b635-e0fca159ef41\") " pod="openshift-marketplace/redhat-operators-9tlg9" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.483866 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxv88\" (UniqueName: \"kubernetes.io/projected/aedcee55-c6fd-4322-b635-e0fca159ef41-kube-api-access-xxv88\") pod \"redhat-operators-9tlg9\" (UID: \"aedcee55-c6fd-4322-b635-e0fca159ef41\") " pod="openshift-marketplace/redhat-operators-9tlg9" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.586447 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aedcee55-c6fd-4322-b635-e0fca159ef41-utilities\") pod \"redhat-operators-9tlg9\" (UID: \"aedcee55-c6fd-4322-b635-e0fca159ef41\") " pod="openshift-marketplace/redhat-operators-9tlg9" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.586610 5001 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xxv88\" (UniqueName: \"kubernetes.io/projected/aedcee55-c6fd-4322-b635-e0fca159ef41-kube-api-access-xxv88\") pod \"redhat-operators-9tlg9\" (UID: \"aedcee55-c6fd-4322-b635-e0fca159ef41\") " pod="openshift-marketplace/redhat-operators-9tlg9" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.586670 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aedcee55-c6fd-4322-b635-e0fca159ef41-catalog-content\") pod \"redhat-operators-9tlg9\" (UID: \"aedcee55-c6fd-4322-b635-e0fca159ef41\") " pod="openshift-marketplace/redhat-operators-9tlg9" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.587024 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aedcee55-c6fd-4322-b635-e0fca159ef41-utilities\") pod \"redhat-operators-9tlg9\" (UID: \"aedcee55-c6fd-4322-b635-e0fca159ef41\") " pod="openshift-marketplace/redhat-operators-9tlg9" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.587416 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aedcee55-c6fd-4322-b635-e0fca159ef41-catalog-content\") pod \"redhat-operators-9tlg9\" (UID: \"aedcee55-c6fd-4322-b635-e0fca159ef41\") " pod="openshift-marketplace/redhat-operators-9tlg9" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.604448 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxv88\" (UniqueName: \"kubernetes.io/projected/aedcee55-c6fd-4322-b635-e0fca159ef41-kube-api-access-xxv88\") pod \"redhat-operators-9tlg9\" (UID: \"aedcee55-c6fd-4322-b635-e0fca159ef41\") " pod="openshift-marketplace/redhat-operators-9tlg9" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.622877 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dmrx4"] Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.623811 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dmrx4" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.625234 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.633197 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dmrx4"] Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.687497 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a1cfc19-6968-49a9-ac16-d66f8f79873e-utilities\") pod \"certified-operators-dmrx4\" (UID: \"8a1cfc19-6968-49a9-ac16-d66f8f79873e\") " pod="openshift-marketplace/certified-operators-dmrx4" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.687558 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzjh4\" (UniqueName: \"kubernetes.io/projected/8a1cfc19-6968-49a9-ac16-d66f8f79873e-kube-api-access-fzjh4\") pod \"certified-operators-dmrx4\" (UID: \"8a1cfc19-6968-49a9-ac16-d66f8f79873e\") " pod="openshift-marketplace/certified-operators-dmrx4" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.687691 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a1cfc19-6968-49a9-ac16-d66f8f79873e-catalog-content\") pod \"certified-operators-dmrx4\" (UID: \"8a1cfc19-6968-49a9-ac16-d66f8f79873e\") " pod="openshift-marketplace/certified-operators-dmrx4" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.746934 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9tlg9" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.789337 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzjh4\" (UniqueName: \"kubernetes.io/projected/8a1cfc19-6968-49a9-ac16-d66f8f79873e-kube-api-access-fzjh4\") pod \"certified-operators-dmrx4\" (UID: \"8a1cfc19-6968-49a9-ac16-d66f8f79873e\") " pod="openshift-marketplace/certified-operators-dmrx4" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.789423 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a1cfc19-6968-49a9-ac16-d66f8f79873e-catalog-content\") pod \"certified-operators-dmrx4\" (UID: \"8a1cfc19-6968-49a9-ac16-d66f8f79873e\") " pod="openshift-marketplace/certified-operators-dmrx4" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.789460 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a1cfc19-6968-49a9-ac16-d66f8f79873e-utilities\") pod \"certified-operators-dmrx4\" (UID: \"8a1cfc19-6968-49a9-ac16-d66f8f79873e\") " pod="openshift-marketplace/certified-operators-dmrx4" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.789898 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a1cfc19-6968-49a9-ac16-d66f8f79873e-utilities\") pod \"certified-operators-dmrx4\" (UID: \"8a1cfc19-6968-49a9-ac16-d66f8f79873e\") " pod="openshift-marketplace/certified-operators-dmrx4" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.790476 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a1cfc19-6968-49a9-ac16-d66f8f79873e-catalog-content\") pod \"certified-operators-dmrx4\" (UID: \"8a1cfc19-6968-49a9-ac16-d66f8f79873e\") " pod="openshift-marketplace/certified-operators-dmrx4" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.808368 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzjh4\" (UniqueName: \"kubernetes.io/projected/8a1cfc19-6968-49a9-ac16-d66f8f79873e-kube-api-access-fzjh4\") pod \"certified-operators-dmrx4\" (UID: \"8a1cfc19-6968-49a9-ac16-d66f8f79873e\") " pod="openshift-marketplace/certified-operators-dmrx4" Jan 28 17:22:27 crc kubenswrapper[5001]: I0128 17:22:26.947522 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dmrx4" Jan 28 17:22:28 crc kubenswrapper[5001]: I0128 17:22:28.231635 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55d4c5b566-z4mpl"] Jan 28 17:22:28 crc kubenswrapper[5001]: I0128 17:22:28.241084 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dmrx4"] Jan 28 17:22:28 crc kubenswrapper[5001]: I0128 17:22:28.245810 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9tlg9"] Jan 28 17:22:28 crc kubenswrapper[5001]: W0128 17:22:28.258621 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaedcee55_c6fd_4322_b635_e0fca159ef41.slice/crio-04c5476256cacde306bf2d7268153cfcb7c87fe580a1b51540984ac28f4d218c WatchSource:0}: Error finding container 04c5476256cacde306bf2d7268153cfcb7c87fe580a1b51540984ac28f4d218c: Status 404 returned error can't find the container with id 04c5476256cacde306bf2d7268153cfcb7c87fe580a1b51540984ac28f4d218c Jan 28 17:22:28 crc kubenswrapper[5001]: I0128 17:22:28.261025 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4q4m6"] Jan 28 17:22:28 crc kubenswrapper[5001]: W0128 17:22:28.274893 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod560f1835_9368_4231_9d85_b0cbcad12b8c.slice/crio-763f6bbb4353c33a5065cf6deabe43c5410249517707e6c90b0b0eff5dfbed51 WatchSource:0}: Error finding container 763f6bbb4353c33a5065cf6deabe43c5410249517707e6c90b0b0eff5dfbed51: Status 404 returned error can't find the container with id 763f6bbb4353c33a5065cf6deabe43c5410249517707e6c90b0b0eff5dfbed51 Jan 28 17:22:28 crc kubenswrapper[5001]: I0128 17:22:28.365637 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" event={"ID":"5dfc0f2b-ae51-46e9-b05a-91931d79d9b2","Type":"ContainerStarted","Data":"ba79bd7832aca8999093e44c97e45e8edc6dfdb2e6df74459166f0d85e4141ef"} Jan 28 17:22:28 crc kubenswrapper[5001]: I0128 17:22:28.370929 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tlg9" event={"ID":"aedcee55-c6fd-4322-b635-e0fca159ef41","Type":"ContainerStarted","Data":"04c5476256cacde306bf2d7268153cfcb7c87fe580a1b51540984ac28f4d218c"} Jan 28 17:22:28 crc kubenswrapper[5001]: I0128 17:22:28.372559 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4q4m6" event={"ID":"560f1835-9368-4231-9d85-b0cbcad12b8c","Type":"ContainerStarted","Data":"763f6bbb4353c33a5065cf6deabe43c5410249517707e6c90b0b0eff5dfbed51"} Jan 28 17:22:28 crc kubenswrapper[5001]: I0128 17:22:28.374692 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndsd8" event={"ID":"01a6f242-b936-4752-b868-ebffda3b8657","Type":"ContainerStarted","Data":"8bc1818c21a05c19ac7e6e525c161aa7e7844fc34a12c38dd248d2fad596a073"} Jan 28 17:22:28 crc kubenswrapper[5001]: I0128 17:22:28.376000 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmrx4" event={"ID":"8a1cfc19-6968-49a9-ac16-d66f8f79873e","Type":"ContainerStarted","Data":"adb8d0b5910bf83cd8abe89ec775474790de9437698d115c1cf60fd568eeae1a"} Jan 28 17:22:29 crc kubenswrapper[5001]: I0128 
17:22:29.398314 5001 generic.go:334] "Generic (PLEG): container finished" podID="8a1cfc19-6968-49a9-ac16-d66f8f79873e" containerID="3af063fa5a1a2d62e2eb049c53961611997377ed6441f989da03192012626720" exitCode=0 Jan 28 17:22:29 crc kubenswrapper[5001]: I0128 17:22:29.398441 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmrx4" event={"ID":"8a1cfc19-6968-49a9-ac16-d66f8f79873e","Type":"ContainerDied","Data":"3af063fa5a1a2d62e2eb049c53961611997377ed6441f989da03192012626720"} Jan 28 17:22:29 crc kubenswrapper[5001]: I0128 17:22:29.405452 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" event={"ID":"5dfc0f2b-ae51-46e9-b05a-91931d79d9b2","Type":"ContainerStarted","Data":"e17ed7e634e3d8075bb0ba884315644c51526252c9a13e0b660dad1d1997ad89"} Jan 28 17:22:29 crc kubenswrapper[5001]: I0128 17:22:29.405746 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" Jan 28 17:22:29 crc kubenswrapper[5001]: I0128 17:22:29.407487 5001 generic.go:334] "Generic (PLEG): container finished" podID="aedcee55-c6fd-4322-b635-e0fca159ef41" containerID="80da24876f4bae3e7aa86876a1e187ec1306cd58e746a61e55716081f2281269" exitCode=0 Jan 28 17:22:29 crc kubenswrapper[5001]: I0128 17:22:29.407538 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tlg9" event={"ID":"aedcee55-c6fd-4322-b635-e0fca159ef41","Type":"ContainerDied","Data":"80da24876f4bae3e7aa86876a1e187ec1306cd58e746a61e55716081f2281269"} Jan 28 17:22:29 crc kubenswrapper[5001]: I0128 17:22:29.409182 5001 generic.go:334] "Generic (PLEG): container finished" podID="560f1835-9368-4231-9d85-b0cbcad12b8c" containerID="644f7fb70750b27077e04efdd574cea09396d6e04674fb13933819394b9b5086" exitCode=0 Jan 28 17:22:29 crc kubenswrapper[5001]: I0128 17:22:29.409236 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4q4m6" event={"ID":"560f1835-9368-4231-9d85-b0cbcad12b8c","Type":"ContainerDied","Data":"644f7fb70750b27077e04efdd574cea09396d6e04674fb13933819394b9b5086"} Jan 28 17:22:29 crc kubenswrapper[5001]: I0128 17:22:29.418019 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" Jan 28 17:22:29 crc kubenswrapper[5001]: I0128 17:22:29.420477 5001 generic.go:334] "Generic (PLEG): container finished" podID="01a6f242-b936-4752-b868-ebffda3b8657" containerID="8bc1818c21a05c19ac7e6e525c161aa7e7844fc34a12c38dd248d2fad596a073" exitCode=0 Jan 28 17:22:29 crc kubenswrapper[5001]: I0128 17:22:29.420548 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndsd8" event={"ID":"01a6f242-b936-4752-b868-ebffda3b8657","Type":"ContainerDied","Data":"8bc1818c21a05c19ac7e6e525c161aa7e7844fc34a12c38dd248d2fad596a073"} Jan 28 17:22:29 crc kubenswrapper[5001]: I0128 17:22:29.457880 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55d4c5b566-z4mpl" podStartSLOduration=7.457855692 podStartE2EDuration="7.457855692s" podCreationTimestamp="2026-01-28 17:22:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:22:29.455048232 +0000 UTC m=+395.622836462" 
watchObservedRunningTime="2026-01-28 17:22:29.457855692 +0000 UTC m=+395.625643922" Jan 28 17:22:32 crc kubenswrapper[5001]: I0128 17:22:32.439922 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndsd8" event={"ID":"01a6f242-b936-4752-b868-ebffda3b8657","Type":"ContainerStarted","Data":"fce7bf2f1d31f051ffc72ac0cfe74907bcaff16f4b778b469b5d753610fe9000"} Jan 28 17:22:32 crc kubenswrapper[5001]: I0128 17:22:32.441948 5001 generic.go:334] "Generic (PLEG): container finished" podID="8a1cfc19-6968-49a9-ac16-d66f8f79873e" containerID="2bf4da5047fe646583758daf4d7bf757896ce051260acaa7963fe5b446617c36" exitCode=0 Jan 28 17:22:32 crc kubenswrapper[5001]: I0128 17:22:32.442034 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmrx4" event={"ID":"8a1cfc19-6968-49a9-ac16-d66f8f79873e","Type":"ContainerDied","Data":"2bf4da5047fe646583758daf4d7bf757896ce051260acaa7963fe5b446617c36"} Jan 28 17:22:32 crc kubenswrapper[5001]: I0128 17:22:32.445440 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tlg9" event={"ID":"aedcee55-c6fd-4322-b635-e0fca159ef41","Type":"ContainerStarted","Data":"b7154e00921be41f6f9047f1ca1ec0e4a2566822d3581ce7f356f198cc3dbabf"} Jan 28 17:22:32 crc kubenswrapper[5001]: I0128 17:22:32.449705 5001 generic.go:334] "Generic (PLEG): container finished" podID="560f1835-9368-4231-9d85-b0cbcad12b8c" containerID="bf61adf65141155d95de18c136f53910863d53f387a91b22d834dddb459773c7" exitCode=0 Jan 28 17:22:32 crc kubenswrapper[5001]: I0128 17:22:32.449765 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4q4m6" event={"ID":"560f1835-9368-4231-9d85-b0cbcad12b8c","Type":"ContainerDied","Data":"bf61adf65141155d95de18c136f53910863d53f387a91b22d834dddb459773c7"} Jan 28 17:22:32 crc kubenswrapper[5001]: I0128 17:22:32.466935 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ndsd8" podStartSLOduration=2.264591302 podStartE2EDuration="8.466919325s" podCreationTimestamp="2026-01-28 17:22:24 +0000 UTC" firstStartedPulling="2026-01-28 17:22:25.347299055 +0000 UTC m=+391.515087275" lastFinishedPulling="2026-01-28 17:22:31.549627068 +0000 UTC m=+397.717415298" observedRunningTime="2026-01-28 17:22:32.463880528 +0000 UTC m=+398.631668758" watchObservedRunningTime="2026-01-28 17:22:32.466919325 +0000 UTC m=+398.634707555" Jan 28 17:22:33 crc kubenswrapper[5001]: I0128 17:22:33.457228 5001 generic.go:334] "Generic (PLEG): container finished" podID="aedcee55-c6fd-4322-b635-e0fca159ef41" containerID="b7154e00921be41f6f9047f1ca1ec0e4a2566822d3581ce7f356f198cc3dbabf" exitCode=0 Jan 28 17:22:33 crc kubenswrapper[5001]: I0128 17:22:33.457322 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tlg9" event={"ID":"aedcee55-c6fd-4322-b635-e0fca159ef41","Type":"ContainerDied","Data":"b7154e00921be41f6f9047f1ca1ec0e4a2566822d3581ce7f356f198cc3dbabf"} Jan 28 17:22:34 crc kubenswrapper[5001]: I0128 17:22:34.386609 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ndsd8" Jan 28 17:22:34 crc kubenswrapper[5001]: I0128 17:22:34.387042 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ndsd8" Jan 28 17:22:34 crc kubenswrapper[5001]: I0128 17:22:34.428462 5001 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ndsd8" Jan 28 17:22:34 crc kubenswrapper[5001]: I0128 17:22:34.464345 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4q4m6" event={"ID":"560f1835-9368-4231-9d85-b0cbcad12b8c","Type":"ContainerStarted","Data":"22a4adb0725e4b90d8cdf2b4112b6ce76d2036a4b306e795483a7ee354aa696f"} Jan 28 17:22:34 crc kubenswrapper[5001]: I0128 17:22:34.466922 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmrx4" event={"ID":"8a1cfc19-6968-49a9-ac16-d66f8f79873e","Type":"ContainerStarted","Data":"a398709afc0ee89cf7b416e277e127fd6ad90cdef3af0dfbb896304e9651276d"} Jan 28 17:22:34 crc kubenswrapper[5001]: I0128 17:22:34.834022 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:22:34 crc kubenswrapper[5001]: I0128 17:22:34.834554 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:22:35 crc kubenswrapper[5001]: I0128 17:22:35.493481 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dmrx4" podStartSLOduration=4.712428904 podStartE2EDuration="9.493465856s" podCreationTimestamp="2026-01-28 17:22:26 +0000 UTC" firstStartedPulling="2026-01-28 17:22:29.400134965 +0000 UTC m=+395.567923195" lastFinishedPulling="2026-01-28 17:22:34.181171917 +0000 UTC m=+400.348960147" observedRunningTime="2026-01-28 17:22:35.493348883 +0000 UTC m=+401.661137123" watchObservedRunningTime="2026-01-28 17:22:35.493465856 +0000 UTC m=+401.661254086" Jan 28 17:22:35 crc kubenswrapper[5001]: I0128 17:22:35.513834 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4q4m6" podStartSLOduration=7.250794019 podStartE2EDuration="11.513813867s" podCreationTimestamp="2026-01-28 17:22:24 +0000 UTC" firstStartedPulling="2026-01-28 17:22:29.410082139 +0000 UTC m=+395.577870369" lastFinishedPulling="2026-01-28 17:22:33.673101987 +0000 UTC m=+399.840890217" observedRunningTime="2026-01-28 17:22:35.512483129 +0000 UTC m=+401.680271379" watchObservedRunningTime="2026-01-28 17:22:35.513813867 +0000 UTC m=+401.681602097" Jan 28 17:22:36 crc kubenswrapper[5001]: I0128 17:22:36.480125 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tlg9" event={"ID":"aedcee55-c6fd-4322-b635-e0fca159ef41","Type":"ContainerStarted","Data":"70d305aa96bf4a3fe5e695f1674fe7e37ae321e094dcf9e062f59079ad396d8e"} Jan 28 17:22:36 crc kubenswrapper[5001]: I0128 17:22:36.747153 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9tlg9" Jan 28 17:22:36 crc kubenswrapper[5001]: I0128 17:22:36.747264 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9tlg9" Jan 28 17:22:36 crc kubenswrapper[5001]: I0128 17:22:36.948377 5001 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dmrx4" Jan 28 17:22:36 crc kubenswrapper[5001]: I0128 17:22:36.948433 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dmrx4" Jan 28 17:22:36 crc kubenswrapper[5001]: I0128 17:22:36.985728 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dmrx4" Jan 28 17:22:37 crc kubenswrapper[5001]: I0128 17:22:37.006566 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9tlg9" podStartSLOduration=4.936546461 podStartE2EDuration="11.006547977s" podCreationTimestamp="2026-01-28 17:22:26 +0000 UTC" firstStartedPulling="2026-01-28 17:22:29.408698519 +0000 UTC m=+395.576486749" lastFinishedPulling="2026-01-28 17:22:35.478700035 +0000 UTC m=+401.646488265" observedRunningTime="2026-01-28 17:22:36.534325371 +0000 UTC m=+402.702113601" watchObservedRunningTime="2026-01-28 17:22:37.006547977 +0000 UTC m=+403.174336207" Jan 28 17:22:37 crc kubenswrapper[5001]: I0128 17:22:37.786294 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9tlg9" podUID="aedcee55-c6fd-4322-b635-e0fca159ef41" containerName="registry-server" probeResult="failure" output=< Jan 28 17:22:37 crc kubenswrapper[5001]: timeout: failed to connect service ":50051" within 1s Jan 28 17:22:37 crc kubenswrapper[5001]: > Jan 28 17:22:44 crc kubenswrapper[5001]: I0128 17:22:44.425336 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ndsd8" Jan 28 17:22:44 crc kubenswrapper[5001]: I0128 17:22:44.544679 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4q4m6" Jan 28 17:22:44 crc kubenswrapper[5001]: I0128 17:22:44.544754 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4q4m6" Jan 28 17:22:44 crc kubenswrapper[5001]: I0128 17:22:44.586671 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4q4m6" Jan 28 17:22:45 crc kubenswrapper[5001]: I0128 17:22:45.585625 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4q4m6" Jan 28 17:22:46 crc kubenswrapper[5001]: I0128 17:22:46.226317 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" podUID="36cbdaab-10af-401c-8ec0-867a5e82dc3d" containerName="registry" containerID="cri-o://c044da6f9679c571c6ebed6ab4ef90f540290a41588e08fd0fa25fc30a8a7544" gracePeriod=30 Jan 28 17:22:46 crc kubenswrapper[5001]: I0128 17:22:46.783434 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9tlg9" Jan 28 17:22:46 crc kubenswrapper[5001]: I0128 17:22:46.827805 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9tlg9" Jan 28 17:22:46 crc kubenswrapper[5001]: I0128 17:22:46.984677 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dmrx4" Jan 28 17:22:47 crc kubenswrapper[5001]: I0128 17:22:47.285147 5001 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-5ns7t 
container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.22:5000/healthz\": dial tcp 10.217.0.22:5000: connect: connection refused" start-of-body= Jan 28 17:22:47 crc kubenswrapper[5001]: I0128 17:22:47.285218 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" podUID="36cbdaab-10af-401c-8ec0-867a5e82dc3d" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.22:5000/healthz\": dial tcp 10.217.0.22:5000: connect: connection refused" Jan 28 17:22:47 crc kubenswrapper[5001]: I0128 17:22:47.556730 5001 generic.go:334] "Generic (PLEG): container finished" podID="36cbdaab-10af-401c-8ec0-867a5e82dc3d" containerID="c044da6f9679c571c6ebed6ab4ef90f540290a41588e08fd0fa25fc30a8a7544" exitCode=0 Jan 28 17:22:47 crc kubenswrapper[5001]: I0128 17:22:47.556781 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" event={"ID":"36cbdaab-10af-401c-8ec0-867a5e82dc3d","Type":"ContainerDied","Data":"c044da6f9679c571c6ebed6ab4ef90f540290a41588e08fd0fa25fc30a8a7544"} Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.559121 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.562027 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" event={"ID":"36cbdaab-10af-401c-8ec0-867a5e82dc3d","Type":"ContainerDied","Data":"96df3d390b5ff2ac46519d8c61da211ae7d777ed41aea66429608a5ab59a68c7"} Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.562204 5001 scope.go:117] "RemoveContainer" containerID="c044da6f9679c571c6ebed6ab4ef90f540290a41588e08fd0fa25fc30a8a7544" Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.562563 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5ns7t" Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.671150 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36cbdaab-10af-401c-8ec0-867a5e82dc3d-bound-sa-token\") pod \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.671204 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/36cbdaab-10af-401c-8ec0-867a5e82dc3d-installation-pull-secrets\") pod \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.671265 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/36cbdaab-10af-401c-8ec0-867a5e82dc3d-registry-certificates\") pod \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.671313 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/36cbdaab-10af-401c-8ec0-867a5e82dc3d-ca-trust-extracted\") pod \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.671338 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36cbdaab-10af-401c-8ec0-867a5e82dc3d-trusted-ca\") pod \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.671375 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/36cbdaab-10af-401c-8ec0-867a5e82dc3d-registry-tls\") pod \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.671507 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.671532 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptcdk\" (UniqueName: \"kubernetes.io/projected/36cbdaab-10af-401c-8ec0-867a5e82dc3d-kube-api-access-ptcdk\") pod \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\" (UID: \"36cbdaab-10af-401c-8ec0-867a5e82dc3d\") " Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.672086 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36cbdaab-10af-401c-8ec0-867a5e82dc3d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "36cbdaab-10af-401c-8ec0-867a5e82dc3d" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.672146 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36cbdaab-10af-401c-8ec0-867a5e82dc3d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "36cbdaab-10af-401c-8ec0-867a5e82dc3d" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.677302 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36cbdaab-10af-401c-8ec0-867a5e82dc3d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "36cbdaab-10af-401c-8ec0-867a5e82dc3d" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.677841 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36cbdaab-10af-401c-8ec0-867a5e82dc3d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "36cbdaab-10af-401c-8ec0-867a5e82dc3d" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.677918 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36cbdaab-10af-401c-8ec0-867a5e82dc3d-kube-api-access-ptcdk" (OuterVolumeSpecName: "kube-api-access-ptcdk") pod "36cbdaab-10af-401c-8ec0-867a5e82dc3d" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d"). InnerVolumeSpecName "kube-api-access-ptcdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.678071 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36cbdaab-10af-401c-8ec0-867a5e82dc3d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "36cbdaab-10af-401c-8ec0-867a5e82dc3d" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.684041 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "36cbdaab-10af-401c-8ec0-867a5e82dc3d" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.691472 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36cbdaab-10af-401c-8ec0-867a5e82dc3d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "36cbdaab-10af-401c-8ec0-867a5e82dc3d" (UID: "36cbdaab-10af-401c-8ec0-867a5e82dc3d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.772769 5001 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36cbdaab-10af-401c-8ec0-867a5e82dc3d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.773134 5001 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/36cbdaab-10af-401c-8ec0-867a5e82dc3d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.773198 5001 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/36cbdaab-10af-401c-8ec0-867a5e82dc3d-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.773253 5001 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/36cbdaab-10af-401c-8ec0-867a5e82dc3d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.773302 5001 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36cbdaab-10af-401c-8ec0-867a5e82dc3d-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.773354 5001 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/36cbdaab-10af-401c-8ec0-867a5e82dc3d-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.773408 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptcdk\" (UniqueName: \"kubernetes.io/projected/36cbdaab-10af-401c-8ec0-867a5e82dc3d-kube-api-access-ptcdk\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.892522 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5ns7t"] Jan 28 17:22:48 crc kubenswrapper[5001]: I0128 17:22:48.897206 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5ns7t"] Jan 28 17:22:50 crc kubenswrapper[5001]: I0128 17:22:50.601893 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36cbdaab-10af-401c-8ec0-867a5e82dc3d" path="/var/lib/kubelet/pods/36cbdaab-10af-401c-8ec0-867a5e82dc3d/volumes" Jan 28 17:23:04 crc kubenswrapper[5001]: I0128 17:23:04.834233 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:23:04 crc kubenswrapper[5001]: I0128 17:23:04.836023 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:23:04 crc kubenswrapper[5001]: I0128 17:23:04.836171 5001 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 
17:23:04 crc kubenswrapper[5001]: I0128 17:23:04.836907 5001 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b7faefbbab6a723fb78d92800b7de1c2ec73448a306765fa7e84130c0372ff39"} pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:23:04 crc kubenswrapper[5001]: I0128 17:23:04.837108 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" containerID="cri-o://b7faefbbab6a723fb78d92800b7de1c2ec73448a306765fa7e84130c0372ff39" gracePeriod=600 Jan 28 17:23:05 crc kubenswrapper[5001]: I0128 17:23:05.658191 5001 generic.go:334] "Generic (PLEG): container finished" podID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerID="b7faefbbab6a723fb78d92800b7de1c2ec73448a306765fa7e84130c0372ff39" exitCode=0 Jan 28 17:23:05 crc kubenswrapper[5001]: I0128 17:23:05.658287 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerDied","Data":"b7faefbbab6a723fb78d92800b7de1c2ec73448a306765fa7e84130c0372ff39"} Jan 28 17:23:05 crc kubenswrapper[5001]: I0128 17:23:05.658744 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerStarted","Data":"ae61fb070fc3dd351681c14090d43b7fd5a4e929a9d267dc18a26f6bd1033912"} Jan 28 17:23:05 crc kubenswrapper[5001]: I0128 17:23:05.658798 5001 scope.go:117] "RemoveContainer" containerID="baf90646cf03ac460f5bbd2f1f2595fee418a9e3c9c695b3112d327f6fdf0fc9" Jan 28 17:25:34 crc kubenswrapper[5001]: I0128 17:25:34.834954 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:25:34 crc kubenswrapper[5001]: I0128 17:25:34.836156 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:26:04 crc kubenswrapper[5001]: I0128 17:26:04.833947 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:26:04 crc kubenswrapper[5001]: I0128 17:26:04.834468 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:26:34 crc kubenswrapper[5001]: I0128 17:26:34.858929 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:26:34 crc kubenswrapper[5001]: I0128 17:26:34.860035 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:26:34 crc kubenswrapper[5001]: I0128 17:26:34.860124 5001 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:26:34 crc kubenswrapper[5001]: I0128 17:26:34.860786 5001 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae61fb070fc3dd351681c14090d43b7fd5a4e929a9d267dc18a26f6bd1033912"} pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:26:34 crc kubenswrapper[5001]: I0128 17:26:34.860840 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" containerID="cri-o://ae61fb070fc3dd351681c14090d43b7fd5a4e929a9d267dc18a26f6bd1033912" gracePeriod=600 Jan 28 17:26:35 crc kubenswrapper[5001]: I0128 17:26:35.755057 5001 generic.go:334] "Generic (PLEG): container finished" podID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerID="ae61fb070fc3dd351681c14090d43b7fd5a4e929a9d267dc18a26f6bd1033912" exitCode=0 Jan 28 17:26:35 crc kubenswrapper[5001]: I0128 17:26:35.755104 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerDied","Data":"ae61fb070fc3dd351681c14090d43b7fd5a4e929a9d267dc18a26f6bd1033912"} Jan 28 17:26:35 crc kubenswrapper[5001]: I0128 17:26:35.755482 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerStarted","Data":"bcd1bb3eeb7df10e3aeb349c79594cb4ff827106a1c1b316d9b57fdb098e8eef"} Jan 28 17:26:35 crc kubenswrapper[5001]: I0128 17:26:35.755506 5001 scope.go:117] "RemoveContainer" containerID="b7faefbbab6a723fb78d92800b7de1c2ec73448a306765fa7e84130c0372ff39" Jan 28 17:28:25 crc kubenswrapper[5001]: I0128 17:28:25.751720 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq"] Jan 28 17:28:25 crc kubenswrapper[5001]: E0128 17:28:25.752507 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36cbdaab-10af-401c-8ec0-867a5e82dc3d" containerName="registry" Jan 28 17:28:25 crc kubenswrapper[5001]: I0128 17:28:25.752523 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="36cbdaab-10af-401c-8ec0-867a5e82dc3d" containerName="registry" Jan 28 17:28:25 crc kubenswrapper[5001]: I0128 17:28:25.752660 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="36cbdaab-10af-401c-8ec0-867a5e82dc3d" containerName="registry" Jan 28 
17:28:25 crc kubenswrapper[5001]: I0128 17:28:25.753529 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq" Jan 28 17:28:25 crc kubenswrapper[5001]: I0128 17:28:25.757228 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 17:28:25 crc kubenswrapper[5001]: I0128 17:28:25.764473 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq\" (UID: \"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq" Jan 28 17:28:25 crc kubenswrapper[5001]: I0128 17:28:25.764544 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq\" (UID: \"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq" Jan 28 17:28:25 crc kubenswrapper[5001]: I0128 17:28:25.764588 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjcvs\" (UniqueName: \"kubernetes.io/projected/5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc-kube-api-access-hjcvs\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq\" (UID: \"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq" Jan 28 17:28:25 crc kubenswrapper[5001]: I0128 17:28:25.767567 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq"] Jan 28 17:28:25 crc kubenswrapper[5001]: I0128 17:28:25.865866 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq\" (UID: \"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq" Jan 28 17:28:25 crc kubenswrapper[5001]: I0128 17:28:25.865953 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq\" (UID: \"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq" Jan 28 17:28:25 crc kubenswrapper[5001]: I0128 17:28:25.866018 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjcvs\" (UniqueName: \"kubernetes.io/projected/5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc-kube-api-access-hjcvs\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq\" (UID: \"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq" Jan 28 17:28:25 crc kubenswrapper[5001]: I0128 17:28:25.866546 5001 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq\" (UID: \"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq" Jan 28 17:28:25 crc kubenswrapper[5001]: I0128 17:28:25.866723 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq\" (UID: \"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq" Jan 28 17:28:25 crc kubenswrapper[5001]: I0128 17:28:25.890792 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjcvs\" (UniqueName: \"kubernetes.io/projected/5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc-kube-api-access-hjcvs\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq\" (UID: \"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq" Jan 28 17:28:26 crc kubenswrapper[5001]: I0128 17:28:26.085537 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq" Jan 28 17:28:26 crc kubenswrapper[5001]: I0128 17:28:26.264373 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq"] Jan 28 17:28:26 crc kubenswrapper[5001]: I0128 17:28:26.347176 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq" event={"ID":"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc","Type":"ContainerStarted","Data":"04d27b07a7a726e7ab137c8418afa403ce48fb3283bd4d5b552965900ba57aa1"} Jan 28 17:28:27 crc kubenswrapper[5001]: I0128 17:28:27.353425 5001 generic.go:334] "Generic (PLEG): container finished" podID="5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc" containerID="fc080ab864d07070492e0de91dcd93550781bdfe77d628f2af0cb73429840147" exitCode=0 Jan 28 17:28:27 crc kubenswrapper[5001]: I0128 17:28:27.353497 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq" event={"ID":"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc","Type":"ContainerDied","Data":"fc080ab864d07070492e0de91dcd93550781bdfe77d628f2af0cb73429840147"} Jan 28 17:28:27 crc kubenswrapper[5001]: I0128 17:28:27.354969 5001 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 17:28:29 crc kubenswrapper[5001]: I0128 17:28:29.367028 5001 generic.go:334] "Generic (PLEG): container finished" podID="5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc" containerID="7c453f4ec0d840ef5750282c30aa9fe0852cc54a05c037c92bd10d4a94f529b8" exitCode=0 Jan 28 17:28:29 crc kubenswrapper[5001]: I0128 17:28:29.367096 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq" event={"ID":"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc","Type":"ContainerDied","Data":"7c453f4ec0d840ef5750282c30aa9fe0852cc54a05c037c92bd10d4a94f529b8"} Jan 28 17:28:30 crc kubenswrapper[5001]: I0128 17:28:30.375579 5001 generic.go:334] 
"Generic (PLEG): container finished" podID="5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc" containerID="19b448eee8741383dce1ba71002bde16af38b88cb2672d0b14407b9108116a74" exitCode=0 Jan 28 17:28:30 crc kubenswrapper[5001]: I0128 17:28:30.375627 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq" event={"ID":"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc","Type":"ContainerDied","Data":"19b448eee8741383dce1ba71002bde16af38b88cb2672d0b14407b9108116a74"} Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.589932 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.653791 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc-bundle\") pod \"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc\" (UID: \"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc\") " Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.653844 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc-util\") pod \"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc\" (UID: \"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc\") " Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.653881 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjcvs\" (UniqueName: \"kubernetes.io/projected/5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc-kube-api-access-hjcvs\") pod \"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc\" (UID: \"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc\") " Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.655034 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc-bundle" (OuterVolumeSpecName: "bundle") pod "5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc" (UID: "5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.662689 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc-kube-api-access-hjcvs" (OuterVolumeSpecName: "kube-api-access-hjcvs") pod "5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc" (UID: "5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc"). InnerVolumeSpecName "kube-api-access-hjcvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.668763 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc-util" (OuterVolumeSpecName: "util") pod "5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc" (UID: "5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.755427 5001 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc-util\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.755456 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjcvs\" (UniqueName: \"kubernetes.io/projected/5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc-kube-api-access-hjcvs\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.755467 5001 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.837188 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xvsrw"] Jan 28 17:28:31 crc kubenswrapper[5001]: E0128 17:28:31.837426 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc" containerName="extract" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.837443 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc" containerName="extract" Jan 28 17:28:31 crc kubenswrapper[5001]: E0128 17:28:31.837461 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc" containerName="util" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.837469 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc" containerName="util" Jan 28 17:28:31 crc kubenswrapper[5001]: E0128 17:28:31.837486 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc" containerName="pull" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.837494 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc" containerName="pull" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.837615 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc" containerName="extract" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.838387 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xvsrw" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.854820 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xvsrw"] Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.856210 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d733ba10-ef90-4ff7-b454-0dfce6dcd010-catalog-content\") pod \"redhat-operators-xvsrw\" (UID: \"d733ba10-ef90-4ff7-b454-0dfce6dcd010\") " pod="openshift-marketplace/redhat-operators-xvsrw" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.856274 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d733ba10-ef90-4ff7-b454-0dfce6dcd010-utilities\") pod \"redhat-operators-xvsrw\" (UID: \"d733ba10-ef90-4ff7-b454-0dfce6dcd010\") " pod="openshift-marketplace/redhat-operators-xvsrw" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.856325 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-958f5\" (UniqueName: \"kubernetes.io/projected/d733ba10-ef90-4ff7-b454-0dfce6dcd010-kube-api-access-958f5\") pod \"redhat-operators-xvsrw\" (UID: \"d733ba10-ef90-4ff7-b454-0dfce6dcd010\") " pod="openshift-marketplace/redhat-operators-xvsrw" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.957612 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d733ba10-ef90-4ff7-b454-0dfce6dcd010-utilities\") pod \"redhat-operators-xvsrw\" (UID: \"d733ba10-ef90-4ff7-b454-0dfce6dcd010\") " pod="openshift-marketplace/redhat-operators-xvsrw" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.957696 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-958f5\" (UniqueName: \"kubernetes.io/projected/d733ba10-ef90-4ff7-b454-0dfce6dcd010-kube-api-access-958f5\") pod \"redhat-operators-xvsrw\" (UID: \"d733ba10-ef90-4ff7-b454-0dfce6dcd010\") " pod="openshift-marketplace/redhat-operators-xvsrw" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.957785 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d733ba10-ef90-4ff7-b454-0dfce6dcd010-catalog-content\") pod \"redhat-operators-xvsrw\" (UID: \"d733ba10-ef90-4ff7-b454-0dfce6dcd010\") " pod="openshift-marketplace/redhat-operators-xvsrw" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.958153 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d733ba10-ef90-4ff7-b454-0dfce6dcd010-utilities\") pod \"redhat-operators-xvsrw\" (UID: \"d733ba10-ef90-4ff7-b454-0dfce6dcd010\") " pod="openshift-marketplace/redhat-operators-xvsrw" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.958250 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d733ba10-ef90-4ff7-b454-0dfce6dcd010-catalog-content\") pod \"redhat-operators-xvsrw\" (UID: \"d733ba10-ef90-4ff7-b454-0dfce6dcd010\") " pod="openshift-marketplace/redhat-operators-xvsrw" Jan 28 17:28:31 crc kubenswrapper[5001]: I0128 17:28:31.979523 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-958f5\" (UniqueName: \"kubernetes.io/projected/d733ba10-ef90-4ff7-b454-0dfce6dcd010-kube-api-access-958f5\") pod \"redhat-operators-xvsrw\" (UID: \"d733ba10-ef90-4ff7-b454-0dfce6dcd010\") " pod="openshift-marketplace/redhat-operators-xvsrw" Jan 28 17:28:32 crc kubenswrapper[5001]: I0128 17:28:32.157966 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xvsrw" Jan 28 17:28:32 crc kubenswrapper[5001]: I0128 17:28:32.347434 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xvsrw"] Jan 28 17:28:32 crc kubenswrapper[5001]: W0128 17:28:32.356369 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd733ba10_ef90_4ff7_b454_0dfce6dcd010.slice/crio-b11e129c7de7769c9fb896584ceea86a460db6c27ca939443ea3fb9e238c81cb WatchSource:0}: Error finding container b11e129c7de7769c9fb896584ceea86a460db6c27ca939443ea3fb9e238c81cb: Status 404 returned error can't find the container with id b11e129c7de7769c9fb896584ceea86a460db6c27ca939443ea3fb9e238c81cb Jan 28 17:28:32 crc kubenswrapper[5001]: I0128 17:28:32.398274 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq" event={"ID":"5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc","Type":"ContainerDied","Data":"04d27b07a7a726e7ab137c8418afa403ce48fb3283bd4d5b552965900ba57aa1"} Jan 28 17:28:32 crc kubenswrapper[5001]: I0128 17:28:32.398320 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04d27b07a7a726e7ab137c8418afa403ce48fb3283bd4d5b552965900ba57aa1" Jan 28 17:28:32 crc kubenswrapper[5001]: I0128 17:28:32.398391 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq" Jan 28 17:28:32 crc kubenswrapper[5001]: I0128 17:28:32.399548 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvsrw" event={"ID":"d733ba10-ef90-4ff7-b454-0dfce6dcd010","Type":"ContainerStarted","Data":"b11e129c7de7769c9fb896584ceea86a460db6c27ca939443ea3fb9e238c81cb"} Jan 28 17:28:33 crc kubenswrapper[5001]: I0128 17:28:33.406373 5001 generic.go:334] "Generic (PLEG): container finished" podID="d733ba10-ef90-4ff7-b454-0dfce6dcd010" containerID="7731461bb051dc26b7d4aef81f72ec7c6858bb54888281a72e646494eb1a9354" exitCode=0 Jan 28 17:28:33 crc kubenswrapper[5001]: I0128 17:28:33.406433 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvsrw" event={"ID":"d733ba10-ef90-4ff7-b454-0dfce6dcd010","Type":"ContainerDied","Data":"7731461bb051dc26b7d4aef81f72ec7c6858bb54888281a72e646494eb1a9354"} Jan 28 17:28:33 crc kubenswrapper[5001]: I0128 17:28:33.716775 5001 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 17:28:34 crc kubenswrapper[5001]: I0128 17:28:34.341648 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-rqzx8"] Jan 28 17:28:34 crc kubenswrapper[5001]: I0128 17:28:34.342359 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-rqzx8" Jan 28 17:28:34 crc kubenswrapper[5001]: I0128 17:28:34.343806 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 28 17:28:34 crc kubenswrapper[5001]: I0128 17:28:34.343954 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 28 17:28:34 crc kubenswrapper[5001]: I0128 17:28:34.343965 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-z596n" Jan 28 17:28:34 crc kubenswrapper[5001]: I0128 17:28:34.358384 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-rqzx8"] Jan 28 17:28:34 crc kubenswrapper[5001]: I0128 17:28:34.416228 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvsrw" event={"ID":"d733ba10-ef90-4ff7-b454-0dfce6dcd010","Type":"ContainerStarted","Data":"30d5db8927ae275f8feeaeac8ccaaf593e25ead56963fe615fe64710f99ad06f"} Jan 28 17:28:34 crc kubenswrapper[5001]: I0128 17:28:34.485461 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcxfb\" (UniqueName: \"kubernetes.io/projected/1ab14cf7-b801-43af-868b-e05478534e41-kube-api-access-zcxfb\") pod \"nmstate-operator-646758c888-rqzx8\" (UID: \"1ab14cf7-b801-43af-868b-e05478534e41\") " pod="openshift-nmstate/nmstate-operator-646758c888-rqzx8" Jan 28 17:28:34 crc kubenswrapper[5001]: I0128 17:28:34.586075 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcxfb\" (UniqueName: \"kubernetes.io/projected/1ab14cf7-b801-43af-868b-e05478534e41-kube-api-access-zcxfb\") pod \"nmstate-operator-646758c888-rqzx8\" (UID: \"1ab14cf7-b801-43af-868b-e05478534e41\") " pod="openshift-nmstate/nmstate-operator-646758c888-rqzx8" Jan 28 17:28:34 crc kubenswrapper[5001]: I0128 17:28:34.605368 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcxfb\" (UniqueName: \"kubernetes.io/projected/1ab14cf7-b801-43af-868b-e05478534e41-kube-api-access-zcxfb\") pod \"nmstate-operator-646758c888-rqzx8\" (UID: \"1ab14cf7-b801-43af-868b-e05478534e41\") " pod="openshift-nmstate/nmstate-operator-646758c888-rqzx8" Jan 28 17:28:34 crc kubenswrapper[5001]: I0128 17:28:34.656218 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-rqzx8" Jan 28 17:28:34 crc kubenswrapper[5001]: I0128 17:28:34.869668 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-rqzx8"] Jan 28 17:28:34 crc kubenswrapper[5001]: W0128 17:28:34.905429 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ab14cf7_b801_43af_868b_e05478534e41.slice/crio-21c4a4ff41c399364a5974453a6673c7ff2b2c3bcaaf5758b95ca5d2e8a6c993 WatchSource:0}: Error finding container 21c4a4ff41c399364a5974453a6673c7ff2b2c3bcaaf5758b95ca5d2e8a6c993: Status 404 returned error can't find the container with id 21c4a4ff41c399364a5974453a6673c7ff2b2c3bcaaf5758b95ca5d2e8a6c993 Jan 28 17:28:35 crc kubenswrapper[5001]: I0128 17:28:35.422292 5001 generic.go:334] "Generic (PLEG): container finished" podID="d733ba10-ef90-4ff7-b454-0dfce6dcd010" containerID="30d5db8927ae275f8feeaeac8ccaaf593e25ead56963fe615fe64710f99ad06f" exitCode=0 Jan 28 17:28:35 crc kubenswrapper[5001]: I0128 17:28:35.422435 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvsrw" event={"ID":"d733ba10-ef90-4ff7-b454-0dfce6dcd010","Type":"ContainerDied","Data":"30d5db8927ae275f8feeaeac8ccaaf593e25ead56963fe615fe64710f99ad06f"} Jan 28 17:28:35 crc kubenswrapper[5001]: I0128 17:28:35.423594 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-rqzx8" event={"ID":"1ab14cf7-b801-43af-868b-e05478534e41","Type":"ContainerStarted","Data":"21c4a4ff41c399364a5974453a6673c7ff2b2c3bcaaf5758b95ca5d2e8a6c993"} Jan 28 17:28:35 crc kubenswrapper[5001]: I0128 17:28:35.597663 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cnffr"] Jan 28 17:28:35 crc kubenswrapper[5001]: I0128 17:28:35.598373 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="northd" containerID="cri-o://88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358" gracePeriod=30 Jan 28 17:28:35 crc kubenswrapper[5001]: I0128 17:28:35.598443 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovn-acl-logging" containerID="cri-o://505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4" gracePeriod=30 Jan 28 17:28:35 crc kubenswrapper[5001]: I0128 17:28:35.598410 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="sbdb" containerID="cri-o://5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb" gracePeriod=30 Jan 28 17:28:35 crc kubenswrapper[5001]: I0128 17:28:35.598483 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="nbdb" containerID="cri-o://f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176" gracePeriod=30 Jan 28 17:28:35 crc kubenswrapper[5001]: I0128 17:28:35.598418 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" podUID="324b03b5-a748-440b-b1ad-15022599b855" 
containerName="kube-rbac-proxy-node" containerID="cri-o://ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df" gracePeriod=30 Jan 28 17:28:35 crc kubenswrapper[5001]: I0128 17:28:35.598415 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovn-controller" containerID="cri-o://311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1" gracePeriod=30 Jan 28 17:28:35 crc kubenswrapper[5001]: I0128 17:28:35.598448 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2" gracePeriod=30 Jan 28 17:28:35 crc kubenswrapper[5001]: I0128 17:28:35.649622 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovnkube-controller" containerID="cri-o://2784ac440cc205327f98403767113fab2703083c63ce4cbe2fd5e230fe576b6a" gracePeriod=30 Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.433688 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovnkube-controller/3.log" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.440301 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovn-acl-logging/0.log" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.441310 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovn-controller/0.log" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.442603 5001 generic.go:334] "Generic (PLEG): container finished" podID="324b03b5-a748-440b-b1ad-15022599b855" containerID="2784ac440cc205327f98403767113fab2703083c63ce4cbe2fd5e230fe576b6a" exitCode=0 Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.442630 5001 generic.go:334] "Generic (PLEG): container finished" podID="324b03b5-a748-440b-b1ad-15022599b855" containerID="5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb" exitCode=0 Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.442639 5001 generic.go:334] "Generic (PLEG): container finished" podID="324b03b5-a748-440b-b1ad-15022599b855" containerID="f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176" exitCode=0 Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.442646 5001 generic.go:334] "Generic (PLEG): container finished" podID="324b03b5-a748-440b-b1ad-15022599b855" containerID="88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358" exitCode=0 Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.442654 5001 generic.go:334] "Generic (PLEG): container finished" podID="324b03b5-a748-440b-b1ad-15022599b855" containerID="425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2" exitCode=0 Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.442660 5001 generic.go:334] "Generic (PLEG): container finished" podID="324b03b5-a748-440b-b1ad-15022599b855" containerID="ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df" exitCode=0 Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.442666 
5001 generic.go:334] "Generic (PLEG): container finished" podID="324b03b5-a748-440b-b1ad-15022599b855" containerID="505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4" exitCode=143 Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.442673 5001 generic.go:334] "Generic (PLEG): container finished" podID="324b03b5-a748-440b-b1ad-15022599b855" containerID="311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1" exitCode=143 Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.442748 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerDied","Data":"2784ac440cc205327f98403767113fab2703083c63ce4cbe2fd5e230fe576b6a"} Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.442826 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerDied","Data":"5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb"} Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.442848 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerDied","Data":"f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176"} Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.442859 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerDied","Data":"88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358"} Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.442870 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerDied","Data":"425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2"} Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.442881 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerDied","Data":"ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df"} Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.442893 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerDied","Data":"505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4"} Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.442906 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerDied","Data":"311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1"} Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.442901 5001 scope.go:117] "RemoveContainer" containerID="30cd840340f9a11c11919b906fad36cd717bf6fdb68a826917c2015a04df7e57" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.446680 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7fgxj_3cd579b1-57ae-4f44-85b5-53b6c746078b/kube-multus/2.log" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.447288 5001 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-7fgxj_3cd579b1-57ae-4f44-85b5-53b6c746078b/kube-multus/1.log" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.447361 5001 generic.go:334] "Generic (PLEG): container finished" podID="3cd579b1-57ae-4f44-85b5-53b6c746078b" containerID="ce3a9bb5672656f9b7c84139662947a20d0c4248d56a0a1cc3fa2790eed2cabf" exitCode=2 Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.447420 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7fgxj" event={"ID":"3cd579b1-57ae-4f44-85b5-53b6c746078b","Type":"ContainerDied","Data":"ce3a9bb5672656f9b7c84139662947a20d0c4248d56a0a1cc3fa2790eed2cabf"} Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.448958 5001 scope.go:117] "RemoveContainer" containerID="ce3a9bb5672656f9b7c84139662947a20d0c4248d56a0a1cc3fa2790eed2cabf" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.502027 5001 scope.go:117] "RemoveContainer" containerID="b0c8ab8cf8afc73c6271962be74de68bfe5f1afb1a4d1725c0733393372a9fa7" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.881560 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovn-acl-logging/0.log" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.882134 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovn-controller/0.log" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.882458 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.930213 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xp89n"] Jan 28 17:28:36 crc kubenswrapper[5001]: E0128 17:28:36.930581 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovnkube-controller" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.930637 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovnkube-controller" Jan 28 17:28:36 crc kubenswrapper[5001]: E0128 17:28:36.930686 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovnkube-controller" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.930731 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovnkube-controller" Jan 28 17:28:36 crc kubenswrapper[5001]: E0128 17:28:36.930778 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="kubecfg-setup" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.930824 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="kubecfg-setup" Jan 28 17:28:36 crc kubenswrapper[5001]: E0128 17:28:36.930900 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovn-acl-logging" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.930945 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovn-acl-logging" Jan 28 17:28:36 crc kubenswrapper[5001]: E0128 17:28:36.931095 5001 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovnkube-controller" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.931154 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovnkube-controller" Jan 28 17:28:36 crc kubenswrapper[5001]: E0128 17:28:36.931203 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="kube-rbac-proxy-node" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.931247 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="kube-rbac-proxy-node" Jan 28 17:28:36 crc kubenswrapper[5001]: E0128 17:28:36.931311 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="sbdb" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.931369 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="sbdb" Jan 28 17:28:36 crc kubenswrapper[5001]: E0128 17:28:36.931419 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.931473 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 17:28:36 crc kubenswrapper[5001]: E0128 17:28:36.931521 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="nbdb" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.931565 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="nbdb" Jan 28 17:28:36 crc kubenswrapper[5001]: E0128 17:28:36.931617 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="northd" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.931662 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="northd" Jan 28 17:28:36 crc kubenswrapper[5001]: E0128 17:28:36.931711 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovn-controller" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.931766 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovn-controller" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.931927 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovn-acl-logging" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.932280 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovn-controller" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.932419 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="nbdb" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.932472 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="northd" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.932519 5001 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovnkube-controller" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.932565 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="kube-rbac-proxy-node" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.932613 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovnkube-controller" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.932658 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="sbdb" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.932712 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovnkube-controller" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.932787 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovnkube-controller" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.932894 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 17:28:36 crc kubenswrapper[5001]: E0128 17:28:36.934160 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovnkube-controller" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.934240 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovnkube-controller" Jan 28 17:28:36 crc kubenswrapper[5001]: E0128 17:28:36.934295 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovnkube-controller" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.934353 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovnkube-controller" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.934491 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="324b03b5-a748-440b-b1ad-15022599b855" containerName="ovnkube-controller" Jan 28 17:28:36 crc kubenswrapper[5001]: I0128 17:28:36.936064 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.015671 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-run-openvswitch\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016108 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-cni-netd\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.015804 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016147 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/324b03b5-a748-440b-b1ad-15022599b855-ovn-node-metrics-cert\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016203 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-cni-bin\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016230 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chwvf\" (UniqueName: \"kubernetes.io/projected/324b03b5-a748-440b-b1ad-15022599b855-kube-api-access-chwvf\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016236 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016283 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-run-netns\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016277 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016311 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/324b03b5-a748-440b-b1ad-15022599b855-ovnkube-script-lib\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016337 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016372 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-slash\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016397 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-systemd-units\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016437 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-slash" (OuterVolumeSpecName: "host-slash") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016442 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-run-systemd\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016482 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016513 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/324b03b5-a748-440b-b1ad-15022599b855-env-overrides\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016543 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-var-lib-openvswitch\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016584 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-log-socket\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016611 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-run-ovn-kubernetes\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016639 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-kubelet\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016609 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016670 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-run-ovn\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016633 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-log-socket" (OuterVolumeSpecName: "log-socket") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016656 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016696 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-etc-openvswitch\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016705 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016720 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-var-lib-cni-networks-ovn-kubernetes\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016725 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324b03b5-a748-440b-b1ad-15022599b855-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016729 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016747 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/324b03b5-a748-440b-b1ad-15022599b855-ovnkube-config\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016760 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016760 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016766 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-node-log\") pod \"324b03b5-a748-440b-b1ad-15022599b855\" (UID: \"324b03b5-a748-440b-b1ad-15022599b855\") " Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016789 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-node-log" (OuterVolumeSpecName: "node-log") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016865 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324b03b5-a748-440b-b1ad-15022599b855-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.016997 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324b03b5-a748-440b-b1ad-15022599b855-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.017190 5001 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-slash\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.017212 5001 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.017224 5001 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/324b03b5-a748-440b-b1ad-15022599b855-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.017232 5001 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.017241 5001 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-log-socket\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.017249 5001 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.017257 5001 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-kubelet\") on 
node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.017266 5001 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.017273 5001 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.017282 5001 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.017292 5001 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/324b03b5-a748-440b-b1ad-15022599b855-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.017300 5001 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-node-log\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.017311 5001 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.017319 5001 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.017327 5001 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.017335 5001 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.017343 5001 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/324b03b5-a748-440b-b1ad-15022599b855-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.026556 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/324b03b5-a748-440b-b1ad-15022599b855-kube-api-access-chwvf" (OuterVolumeSpecName: "kube-api-access-chwvf") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). InnerVolumeSpecName "kube-api-access-chwvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.032394 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). 
InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.040448 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/324b03b5-a748-440b-b1ad-15022599b855-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "324b03b5-a748-440b-b1ad-15022599b855" (UID: "324b03b5-a748-440b-b1ad-15022599b855"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118244 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-run-openvswitch\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118299 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-var-lib-openvswitch\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118324 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-run-ovn\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118348 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-ovn-node-metrics-cert\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118373 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-systemd-units\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118390 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-cni-bin\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118405 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-node-log\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118422 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-cni-netd\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118439 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118492 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-ovnkube-script-lib\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118507 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-run-netns\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118524 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-run-ovn-kubernetes\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118542 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-slash\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118567 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-ovnkube-config\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118581 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-kubelet\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118601 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-log-socket\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118619 
5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-etc-openvswitch\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118637 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-env-overrides\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118654 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-run-systemd\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118672 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn8h5\" (UniqueName: \"kubernetes.io/projected/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-kube-api-access-wn8h5\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118708 5001 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/324b03b5-a748-440b-b1ad-15022599b855-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118719 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chwvf\" (UniqueName: \"kubernetes.io/projected/324b03b5-a748-440b-b1ad-15022599b855-kube-api-access-chwvf\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.118728 5001 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/324b03b5-a748-440b-b1ad-15022599b855-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.219916 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn8h5\" (UniqueName: \"kubernetes.io/projected/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-kube-api-access-wn8h5\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.219986 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-run-openvswitch\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220115 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-var-lib-openvswitch\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220058 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-run-openvswitch\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220188 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-run-ovn\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220264 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-run-ovn\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220316 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-var-lib-openvswitch\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220371 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-ovn-node-metrics-cert\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220399 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-systemd-units\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220416 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-cni-bin\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220435 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-node-log\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220458 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-cni-netd\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220471 5001 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-systemd-units\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220478 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-node-log\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220504 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220480 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220526 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-cni-bin\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220551 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-ovnkube-script-lib\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220534 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-cni-netd\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220586 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-run-netns\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220614 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-run-ovn-kubernetes\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220642 5001 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-slash\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220686 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-ovnkube-config\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220703 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-kubelet\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220738 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-log-socket\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220757 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-etc-openvswitch\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220776 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-env-overrides\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220801 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-run-systemd\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220921 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-run-systemd\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220948 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-run-ovn-kubernetes\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220969 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-slash\") pod 
\"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.220643 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-run-netns\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.221265 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-ovnkube-script-lib\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.221336 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-log-socket\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.221362 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-host-kubelet\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.221383 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-etc-openvswitch\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.221623 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-ovnkube-config\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.221685 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-env-overrides\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.223914 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-ovn-node-metrics-cert\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.234014 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn8h5\" (UniqueName: \"kubernetes.io/projected/7b9b8c1b-510b-4d07-9618-cc8b7b12e509-kube-api-access-wn8h5\") pod \"ovnkube-node-xp89n\" (UID: \"7b9b8c1b-510b-4d07-9618-cc8b7b12e509\") " pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc 
kubenswrapper[5001]: I0128 17:28:37.249879 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.456872 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7fgxj_3cd579b1-57ae-4f44-85b5-53b6c746078b/kube-multus/2.log" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.456997 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7fgxj" event={"ID":"3cd579b1-57ae-4f44-85b5-53b6c746078b","Type":"ContainerStarted","Data":"46bec173197e9ac4c1e06f8f07aeb14e8b5b44e1b152ff7ef3583666b35f977a"} Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.460905 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvsrw" event={"ID":"d733ba10-ef90-4ff7-b454-0dfce6dcd010","Type":"ContainerStarted","Data":"2148a2d1ecad9772df1741f7b61f93047bb328ef8ea536be2f70485f64657c1b"} Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.464014 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" event={"ID":"7b9b8c1b-510b-4d07-9618-cc8b7b12e509","Type":"ContainerStarted","Data":"be99334221fff3185b8de887976d663bc87af1fcfd38e4d59f91fe0fdc44618e"} Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.473031 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovn-acl-logging/0.log" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.473568 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cnffr_324b03b5-a748-440b-b1ad-15022599b855/ovn-controller/0.log" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.474034 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" event={"ID":"324b03b5-a748-440b-b1ad-15022599b855","Type":"ContainerDied","Data":"3a545b66a10a7b355086626a2562795ad59ca42ed460cf9800b4d0de3b86ca5a"} Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.474109 5001 scope.go:117] "RemoveContainer" containerID="2784ac440cc205327f98403767113fab2703083c63ce4cbe2fd5e230fe576b6a" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.474171 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cnffr" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.498445 5001 scope.go:117] "RemoveContainer" containerID="5cf0046eff41662f1bd5311d5b6c792f0f03ab3b210e586e30da7cf935a882cb" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.507284 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xvsrw" podStartSLOduration=3.489886031 podStartE2EDuration="6.507264533s" podCreationTimestamp="2026-01-28 17:28:31 +0000 UTC" firstStartedPulling="2026-01-28 17:28:33.408736928 +0000 UTC m=+759.576525158" lastFinishedPulling="2026-01-28 17:28:36.42611544 +0000 UTC m=+762.593903660" observedRunningTime="2026-01-28 17:28:37.503857194 +0000 UTC m=+763.671645424" watchObservedRunningTime="2026-01-28 17:28:37.507264533 +0000 UTC m=+763.675052763" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.514734 5001 scope.go:117] "RemoveContainer" containerID="f866e32421a464224dc5644a08010bb2bbf017f0aad0ca85f0ec1a19175b3176" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.528451 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cnffr"] Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.533029 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cnffr"] Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.586518 5001 scope.go:117] "RemoveContainer" containerID="88db1fb42cc131d79b25cec4e926c6ea6e3f490e61a269166058d8442c45c358" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.599365 5001 scope.go:117] "RemoveContainer" containerID="425a18c5498c309445c561abe74355f8a9abde79d195ce77a0ff81ae4fb27eb2" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.613007 5001 scope.go:117] "RemoveContainer" containerID="ba96247cc50d90221d20caaa3eb66279a396950079b96069c48f373a445a52df" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.628054 5001 scope.go:117] "RemoveContainer" containerID="505516bc8beff0cab3e78bc9e3a72291e0aae43bfbd60e04bafd42625f688bd4" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.643311 5001 scope.go:117] "RemoveContainer" containerID="311e47dae8da306332944778e44d677b38137cff913d8579438246c14ab265b1" Jan 28 17:28:37 crc kubenswrapper[5001]: I0128 17:28:37.658930 5001 scope.go:117] "RemoveContainer" containerID="8cd6dbbfecd8dde08ff390b2876f63f700388b42ed82197fd90c22c928adb1c6" Jan 28 17:28:38 crc kubenswrapper[5001]: I0128 17:28:38.482283 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-rqzx8" event={"ID":"1ab14cf7-b801-43af-868b-e05478534e41","Type":"ContainerStarted","Data":"304925dd717ddb7b9b7180bf98365e38594413f0c33578e557f168fd4af3d520"} Jan 28 17:28:38 crc kubenswrapper[5001]: I0128 17:28:38.486160 5001 generic.go:334] "Generic (PLEG): container finished" podID="7b9b8c1b-510b-4d07-9618-cc8b7b12e509" containerID="21f83e03588eba0504012d21bfc23730c82b9852e787492e6cdc0521623903fb" exitCode=0 Jan 28 17:28:38 crc kubenswrapper[5001]: I0128 17:28:38.486256 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" event={"ID":"7b9b8c1b-510b-4d07-9618-cc8b7b12e509","Type":"ContainerDied","Data":"21f83e03588eba0504012d21bfc23730c82b9852e787492e6cdc0521623903fb"} Jan 28 17:28:38 crc kubenswrapper[5001]: I0128 17:28:38.505606 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-operator-646758c888-rqzx8" podStartSLOduration=2.00736511 podStartE2EDuration="4.505585601s" podCreationTimestamp="2026-01-28 17:28:34 +0000 UTC" firstStartedPulling="2026-01-28 17:28:34.90811747 +0000 UTC m=+761.075905700" lastFinishedPulling="2026-01-28 17:28:37.406337961 +0000 UTC m=+763.574126191" observedRunningTime="2026-01-28 17:28:38.503603773 +0000 UTC m=+764.671392003" watchObservedRunningTime="2026-01-28 17:28:38.505585601 +0000 UTC m=+764.673373831" Jan 28 17:28:38 crc kubenswrapper[5001]: I0128 17:28:38.601107 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="324b03b5-a748-440b-b1ad-15022599b855" path="/var/lib/kubelet/pods/324b03b5-a748-440b-b1ad-15022599b855/volumes" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.493528 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" event={"ID":"7b9b8c1b-510b-4d07-9618-cc8b7b12e509","Type":"ContainerStarted","Data":"dd28b9bcf844de31ab4e9e0763c191c980f58848ecf0257f768859b2c11426c8"} Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.493858 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" event={"ID":"7b9b8c1b-510b-4d07-9618-cc8b7b12e509","Type":"ContainerStarted","Data":"1f3adfa1a794c708b986fc6ca4767b1730533ff56aa71b59c3f9a76013049723"} Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.493878 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" event={"ID":"7b9b8c1b-510b-4d07-9618-cc8b7b12e509","Type":"ContainerStarted","Data":"d3c93e350642f526b5ee73793d497bb20f766ed1d09dffc2b5c0d8fdbc472a40"} Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.493891 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" event={"ID":"7b9b8c1b-510b-4d07-9618-cc8b7b12e509","Type":"ContainerStarted","Data":"b576e2b6d20519fb5eeb2491a9a56e59e44f9b5e3ab70541f98ec420982dfba3"} Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.493903 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" event={"ID":"7b9b8c1b-510b-4d07-9618-cc8b7b12e509","Type":"ContainerStarted","Data":"b52d6675fd7a13fa4581d4deb1d79f4498742622b44aa4c561709b4fa78f13ca"} Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.493912 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" event={"ID":"7b9b8c1b-510b-4d07-9618-cc8b7b12e509","Type":"ContainerStarted","Data":"657ae04d3d0e8d6257c76637e8322a6799ff7761abe750a2b5144654f8584542"} Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.555363 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-t9m4t"] Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.556284 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-t9m4t" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.559487 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-vs2s6" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.573345 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6"] Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.574178 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.578256 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.598371 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-v9mzz"] Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.599222 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-v9mzz" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.650688 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f75ca53f-11de-4a98-93dc-0f269011b505-dbus-socket\") pod \"nmstate-handler-v9mzz\" (UID: \"f75ca53f-11de-4a98-93dc-0f269011b505\") " pod="openshift-nmstate/nmstate-handler-v9mzz" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.650746 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmj7x\" (UniqueName: \"kubernetes.io/projected/a29a6746-0c3a-4887-ac29-530b8771c1dc-kube-api-access-dmj7x\") pod \"nmstate-webhook-8474b5b9d8-fmrf6\" (UID: \"a29a6746-0c3a-4887-ac29-530b8771c1dc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.650797 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd582\" (UniqueName: \"kubernetes.io/projected/4defca7c-f2c4-428f-b722-a1e9895e42fe-kube-api-access-xd582\") pod \"nmstate-metrics-54757c584b-t9m4t\" (UID: \"4defca7c-f2c4-428f-b722-a1e9895e42fe\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-t9m4t" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.650828 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f75ca53f-11de-4a98-93dc-0f269011b505-nmstate-lock\") pod \"nmstate-handler-v9mzz\" (UID: \"f75ca53f-11de-4a98-93dc-0f269011b505\") " pod="openshift-nmstate/nmstate-handler-v9mzz" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.650849 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a29a6746-0c3a-4887-ac29-530b8771c1dc-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-fmrf6\" (UID: \"a29a6746-0c3a-4887-ac29-530b8771c1dc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.650923 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlq5p\" (UniqueName: \"kubernetes.io/projected/f75ca53f-11de-4a98-93dc-0f269011b505-kube-api-access-qlq5p\") pod \"nmstate-handler-v9mzz\" (UID: \"f75ca53f-11de-4a98-93dc-0f269011b505\") " pod="openshift-nmstate/nmstate-handler-v9mzz" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.650945 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f75ca53f-11de-4a98-93dc-0f269011b505-ovs-socket\") pod \"nmstate-handler-v9mzz\" (UID: \"f75ca53f-11de-4a98-93dc-0f269011b505\") " pod="openshift-nmstate/nmstate-handler-v9mzz" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 
17:28:39.719848 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl"] Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.720514 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.725564 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.725623 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-4wxwn" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.727852 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.752232 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlq5p\" (UniqueName: \"kubernetes.io/projected/f75ca53f-11de-4a98-93dc-0f269011b505-kube-api-access-qlq5p\") pod \"nmstate-handler-v9mzz\" (UID: \"f75ca53f-11de-4a98-93dc-0f269011b505\") " pod="openshift-nmstate/nmstate-handler-v9mzz" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.752281 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f75ca53f-11de-4a98-93dc-0f269011b505-ovs-socket\") pod \"nmstate-handler-v9mzz\" (UID: \"f75ca53f-11de-4a98-93dc-0f269011b505\") " pod="openshift-nmstate/nmstate-handler-v9mzz" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.752319 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6aa45dc9-1c42-4dbc-b421-cd87505ab222-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-gwvwl\" (UID: \"6aa45dc9-1c42-4dbc-b421-cd87505ab222\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.752365 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f75ca53f-11de-4a98-93dc-0f269011b505-dbus-socket\") pod \"nmstate-handler-v9mzz\" (UID: \"f75ca53f-11de-4a98-93dc-0f269011b505\") " pod="openshift-nmstate/nmstate-handler-v9mzz" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.752386 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks4bq\" (UniqueName: \"kubernetes.io/projected/6aa45dc9-1c42-4dbc-b421-cd87505ab222-kube-api-access-ks4bq\") pod \"nmstate-console-plugin-7754f76f8b-gwvwl\" (UID: \"6aa45dc9-1c42-4dbc-b421-cd87505ab222\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.752404 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6aa45dc9-1c42-4dbc-b421-cd87505ab222-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-gwvwl\" (UID: \"6aa45dc9-1c42-4dbc-b421-cd87505ab222\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.752421 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmj7x\" (UniqueName: 
\"kubernetes.io/projected/a29a6746-0c3a-4887-ac29-530b8771c1dc-kube-api-access-dmj7x\") pod \"nmstate-webhook-8474b5b9d8-fmrf6\" (UID: \"a29a6746-0c3a-4887-ac29-530b8771c1dc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.752444 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd582\" (UniqueName: \"kubernetes.io/projected/4defca7c-f2c4-428f-b722-a1e9895e42fe-kube-api-access-xd582\") pod \"nmstate-metrics-54757c584b-t9m4t\" (UID: \"4defca7c-f2c4-428f-b722-a1e9895e42fe\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-t9m4t" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.752445 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f75ca53f-11de-4a98-93dc-0f269011b505-ovs-socket\") pod \"nmstate-handler-v9mzz\" (UID: \"f75ca53f-11de-4a98-93dc-0f269011b505\") " pod="openshift-nmstate/nmstate-handler-v9mzz" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.752493 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f75ca53f-11de-4a98-93dc-0f269011b505-nmstate-lock\") pod \"nmstate-handler-v9mzz\" (UID: \"f75ca53f-11de-4a98-93dc-0f269011b505\") " pod="openshift-nmstate/nmstate-handler-v9mzz" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.752533 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a29a6746-0c3a-4887-ac29-530b8771c1dc-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-fmrf6\" (UID: \"a29a6746-0c3a-4887-ac29-530b8771c1dc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.752578 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f75ca53f-11de-4a98-93dc-0f269011b505-nmstate-lock\") pod \"nmstate-handler-v9mzz\" (UID: \"f75ca53f-11de-4a98-93dc-0f269011b505\") " pod="openshift-nmstate/nmstate-handler-v9mzz" Jan 28 17:28:39 crc kubenswrapper[5001]: E0128 17:28:39.752625 5001 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 28 17:28:39 crc kubenswrapper[5001]: E0128 17:28:39.752668 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a29a6746-0c3a-4887-ac29-530b8771c1dc-tls-key-pair podName:a29a6746-0c3a-4887-ac29-530b8771c1dc nodeName:}" failed. No retries permitted until 2026-01-28 17:28:40.252651234 +0000 UTC m=+766.420439464 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/a29a6746-0c3a-4887-ac29-530b8771c1dc-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-fmrf6" (UID: "a29a6746-0c3a-4887-ac29-530b8771c1dc") : secret "openshift-nmstate-webhook" not found Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.752670 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f75ca53f-11de-4a98-93dc-0f269011b505-dbus-socket\") pod \"nmstate-handler-v9mzz\" (UID: \"f75ca53f-11de-4a98-93dc-0f269011b505\") " pod="openshift-nmstate/nmstate-handler-v9mzz" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.774139 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmj7x\" (UniqueName: \"kubernetes.io/projected/a29a6746-0c3a-4887-ac29-530b8771c1dc-kube-api-access-dmj7x\") pod \"nmstate-webhook-8474b5b9d8-fmrf6\" (UID: \"a29a6746-0c3a-4887-ac29-530b8771c1dc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.789306 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd582\" (UniqueName: \"kubernetes.io/projected/4defca7c-f2c4-428f-b722-a1e9895e42fe-kube-api-access-xd582\") pod \"nmstate-metrics-54757c584b-t9m4t\" (UID: \"4defca7c-f2c4-428f-b722-a1e9895e42fe\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-t9m4t" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.789770 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlq5p\" (UniqueName: \"kubernetes.io/projected/f75ca53f-11de-4a98-93dc-0f269011b505-kube-api-access-qlq5p\") pod \"nmstate-handler-v9mzz\" (UID: \"f75ca53f-11de-4a98-93dc-0f269011b505\") " pod="openshift-nmstate/nmstate-handler-v9mzz" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.853489 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6aa45dc9-1c42-4dbc-b421-cd87505ab222-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-gwvwl\" (UID: \"6aa45dc9-1c42-4dbc-b421-cd87505ab222\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.853581 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks4bq\" (UniqueName: \"kubernetes.io/projected/6aa45dc9-1c42-4dbc-b421-cd87505ab222-kube-api-access-ks4bq\") pod \"nmstate-console-plugin-7754f76f8b-gwvwl\" (UID: \"6aa45dc9-1c42-4dbc-b421-cd87505ab222\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.853608 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6aa45dc9-1c42-4dbc-b421-cd87505ab222-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-gwvwl\" (UID: \"6aa45dc9-1c42-4dbc-b421-cd87505ab222\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:39 crc kubenswrapper[5001]: E0128 17:28:39.853758 5001 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 28 17:28:39 crc kubenswrapper[5001]: E0128 17:28:39.853818 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6aa45dc9-1c42-4dbc-b421-cd87505ab222-plugin-serving-cert 
podName:6aa45dc9-1c42-4dbc-b421-cd87505ab222 nodeName:}" failed. No retries permitted until 2026-01-28 17:28:40.353799512 +0000 UTC m=+766.521587742 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/6aa45dc9-1c42-4dbc-b421-cd87505ab222-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-gwvwl" (UID: "6aa45dc9-1c42-4dbc-b421-cd87505ab222") : secret "plugin-serving-cert" not found Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.855097 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6aa45dc9-1c42-4dbc-b421-cd87505ab222-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-gwvwl\" (UID: \"6aa45dc9-1c42-4dbc-b421-cd87505ab222\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.872707 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-t9m4t" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.879215 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks4bq\" (UniqueName: \"kubernetes.io/projected/6aa45dc9-1c42-4dbc-b421-cd87505ab222-kube-api-access-ks4bq\") pod \"nmstate-console-plugin-7754f76f8b-gwvwl\" (UID: \"6aa45dc9-1c42-4dbc-b421-cd87505ab222\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:39 crc kubenswrapper[5001]: E0128 17:28:39.897804 5001 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-t9m4t_openshift-nmstate_4defca7c-f2c4-428f-b722-a1e9895e42fe_0(2b1adfd78667a692747d5f8756460af26f6b92b363d9eb0228836188168cf47e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 17:28:39 crc kubenswrapper[5001]: E0128 17:28:39.898212 5001 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-t9m4t_openshift-nmstate_4defca7c-f2c4-428f-b722-a1e9895e42fe_0(2b1adfd78667a692747d5f8756460af26f6b92b363d9eb0228836188168cf47e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-metrics-54757c584b-t9m4t" Jan 28 17:28:39 crc kubenswrapper[5001]: E0128 17:28:39.898231 5001 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-t9m4t_openshift-nmstate_4defca7c-f2c4-428f-b722-a1e9895e42fe_0(2b1adfd78667a692747d5f8756460af26f6b92b363d9eb0228836188168cf47e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-metrics-54757c584b-t9m4t" Jan 28 17:28:39 crc kubenswrapper[5001]: E0128 17:28:39.898274 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-metrics-54757c584b-t9m4t_openshift-nmstate(4defca7c-f2c4-428f-b722-a1e9895e42fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-metrics-54757c584b-t9m4t_openshift-nmstate(4defca7c-f2c4-428f-b722-a1e9895e42fe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-t9m4t_openshift-nmstate_4defca7c-f2c4-428f-b722-a1e9895e42fe_0(2b1adfd78667a692747d5f8756460af26f6b92b363d9eb0228836188168cf47e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-metrics-54757c584b-t9m4t" podUID="4defca7c-f2c4-428f-b722-a1e9895e42fe" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.914499 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7589cd8bf7-f96zs"] Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.915166 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:39 crc kubenswrapper[5001]: I0128 17:28:39.915864 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-v9mzz" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.055961 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a92ae8b8-b047-4412-aa99-928155d7df88-console-config\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.056045 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a92ae8b8-b047-4412-aa99-928155d7df88-console-serving-cert\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.056071 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a92ae8b8-b047-4412-aa99-928155d7df88-oauth-serving-cert\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.056085 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a92ae8b8-b047-4412-aa99-928155d7df88-service-ca\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.056105 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a92ae8b8-b047-4412-aa99-928155d7df88-console-oauth-config\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 
17:28:40.056126 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a92ae8b8-b047-4412-aa99-928155d7df88-trusted-ca-bundle\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.056147 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmksk\" (UniqueName: \"kubernetes.io/projected/a92ae8b8-b047-4412-aa99-928155d7df88-kube-api-access-tmksk\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.157184 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a92ae8b8-b047-4412-aa99-928155d7df88-trusted-ca-bundle\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.157231 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmksk\" (UniqueName: \"kubernetes.io/projected/a92ae8b8-b047-4412-aa99-928155d7df88-kube-api-access-tmksk\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.157287 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a92ae8b8-b047-4412-aa99-928155d7df88-console-config\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.157329 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a92ae8b8-b047-4412-aa99-928155d7df88-console-serving-cert\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.157348 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a92ae8b8-b047-4412-aa99-928155d7df88-service-ca\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.157361 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a92ae8b8-b047-4412-aa99-928155d7df88-oauth-serving-cert\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.157379 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a92ae8b8-b047-4412-aa99-928155d7df88-console-oauth-config\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 
28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.158789 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a92ae8b8-b047-4412-aa99-928155d7df88-service-ca\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.158795 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a92ae8b8-b047-4412-aa99-928155d7df88-console-config\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.158882 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a92ae8b8-b047-4412-aa99-928155d7df88-oauth-serving-cert\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.159160 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a92ae8b8-b047-4412-aa99-928155d7df88-trusted-ca-bundle\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.160840 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a92ae8b8-b047-4412-aa99-928155d7df88-console-oauth-config\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.161160 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a92ae8b8-b047-4412-aa99-928155d7df88-console-serving-cert\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.173520 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmksk\" (UniqueName: \"kubernetes.io/projected/a92ae8b8-b047-4412-aa99-928155d7df88-kube-api-access-tmksk\") pod \"console-7589cd8bf7-f96zs\" (UID: \"a92ae8b8-b047-4412-aa99-928155d7df88\") " pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.254875 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.258619 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a29a6746-0c3a-4887-ac29-530b8771c1dc-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-fmrf6\" (UID: \"a29a6746-0c3a-4887-ac29-530b8771c1dc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.262695 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a29a6746-0c3a-4887-ac29-530b8771c1dc-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-fmrf6\" (UID: \"a29a6746-0c3a-4887-ac29-530b8771c1dc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" Jan 28 17:28:40 crc kubenswrapper[5001]: E0128 17:28:40.273624 5001 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-7589cd8bf7-f96zs_openshift-console_a92ae8b8-b047-4412-aa99-928155d7df88_0(08cae682346252b4321266528bc63d59464071c879f65add509204cce1e67768): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 17:28:40 crc kubenswrapper[5001]: E0128 17:28:40.273694 5001 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-7589cd8bf7-f96zs_openshift-console_a92ae8b8-b047-4412-aa99-928155d7df88_0(08cae682346252b4321266528bc63d59464071c879f65add509204cce1e67768): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: E0128 17:28:40.273716 5001 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-7589cd8bf7-f96zs_openshift-console_a92ae8b8-b047-4412-aa99-928155d7df88_0(08cae682346252b4321266528bc63d59464071c879f65add509204cce1e67768): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:40 crc kubenswrapper[5001]: E0128 17:28:40.273762 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"console-7589cd8bf7-f96zs_openshift-console(a92ae8b8-b047-4412-aa99-928155d7df88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"console-7589cd8bf7-f96zs_openshift-console(a92ae8b8-b047-4412-aa99-928155d7df88)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-7589cd8bf7-f96zs_openshift-console_a92ae8b8-b047-4412-aa99-928155d7df88_0(08cae682346252b4321266528bc63d59464071c879f65add509204cce1e67768): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-console/console-7589cd8bf7-f96zs" podUID="a92ae8b8-b047-4412-aa99-928155d7df88" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.359772 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6aa45dc9-1c42-4dbc-b421-cd87505ab222-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-gwvwl\" (UID: \"6aa45dc9-1c42-4dbc-b421-cd87505ab222\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.362918 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6aa45dc9-1c42-4dbc-b421-cd87505ab222-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-gwvwl\" (UID: \"6aa45dc9-1c42-4dbc-b421-cd87505ab222\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.493349 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.498505 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-v9mzz" event={"ID":"f75ca53f-11de-4a98-93dc-0f269011b505","Type":"ContainerStarted","Data":"92c6b878fb721f80a9e8a51f66e11ebe90994367c44be60eaf66f439ba26d9d1"} Jan 28 17:28:40 crc kubenswrapper[5001]: E0128 17:28:40.515558 5001 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-fmrf6_openshift-nmstate_a29a6746-0c3a-4887-ac29-530b8771c1dc_0(cf5635b60c83c43a7c451fedb4874e59080517b6fe530374bc87378a4a037004): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 17:28:40 crc kubenswrapper[5001]: E0128 17:28:40.515668 5001 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-fmrf6_openshift-nmstate_a29a6746-0c3a-4887-ac29-530b8771c1dc_0(cf5635b60c83c43a7c451fedb4874e59080517b6fe530374bc87378a4a037004): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" Jan 28 17:28:40 crc kubenswrapper[5001]: E0128 17:28:40.515695 5001 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-fmrf6_openshift-nmstate_a29a6746-0c3a-4887-ac29-530b8771c1dc_0(cf5635b60c83c43a7c451fedb4874e59080517b6fe530374bc87378a4a037004): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" Jan 28 17:28:40 crc kubenswrapper[5001]: E0128 17:28:40.515750 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-webhook-8474b5b9d8-fmrf6_openshift-nmstate(a29a6746-0c3a-4887-ac29-530b8771c1dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-webhook-8474b5b9d8-fmrf6_openshift-nmstate(a29a6746-0c3a-4887-ac29-530b8771c1dc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-fmrf6_openshift-nmstate_a29a6746-0c3a-4887-ac29-530b8771c1dc_0(cf5635b60c83c43a7c451fedb4874e59080517b6fe530374bc87378a4a037004): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" podUID="a29a6746-0c3a-4887-ac29-530b8771c1dc" Jan 28 17:28:40 crc kubenswrapper[5001]: I0128 17:28:40.633140 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:40 crc kubenswrapper[5001]: E0128 17:28:40.656777 5001 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-gwvwl_openshift-nmstate_6aa45dc9-1c42-4dbc-b421-cd87505ab222_0(773eca4e94eb1eb984b10461595cfda1cb3deda0257c7e35cca31653d285857e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 17:28:40 crc kubenswrapper[5001]: E0128 17:28:40.656852 5001 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-gwvwl_openshift-nmstate_6aa45dc9-1c42-4dbc-b421-cd87505ab222_0(773eca4e94eb1eb984b10461595cfda1cb3deda0257c7e35cca31653d285857e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:40 crc kubenswrapper[5001]: E0128 17:28:40.656872 5001 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-gwvwl_openshift-nmstate_6aa45dc9-1c42-4dbc-b421-cd87505ab222_0(773eca4e94eb1eb984b10461595cfda1cb3deda0257c7e35cca31653d285857e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:40 crc kubenswrapper[5001]: E0128 17:28:40.656921 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-console-plugin-7754f76f8b-gwvwl_openshift-nmstate(6aa45dc9-1c42-4dbc-b421-cd87505ab222)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-console-plugin-7754f76f8b-gwvwl_openshift-nmstate(6aa45dc9-1c42-4dbc-b421-cd87505ab222)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-gwvwl_openshift-nmstate_6aa45dc9-1c42-4dbc-b421-cd87505ab222_0(773eca4e94eb1eb984b10461595cfda1cb3deda0257c7e35cca31653d285857e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" podUID="6aa45dc9-1c42-4dbc-b421-cd87505ab222" Jan 28 17:28:42 crc kubenswrapper[5001]: I0128 17:28:42.158636 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xvsrw" Jan 28 17:28:42 crc kubenswrapper[5001]: I0128 17:28:42.159071 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xvsrw" Jan 28 17:28:42 crc kubenswrapper[5001]: I0128 17:28:42.203558 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xvsrw" Jan 28 17:28:42 crc kubenswrapper[5001]: I0128 17:28:42.516952 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" event={"ID":"7b9b8c1b-510b-4d07-9618-cc8b7b12e509","Type":"ContainerStarted","Data":"c9c923a4302809d8b1828dba4a90f0001fd9d2ec0d88fa34edad743ca0d4a2e6"} Jan 28 17:28:42 crc kubenswrapper[5001]: I0128 17:28:42.555850 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xvsrw" Jan 28 17:28:42 crc kubenswrapper[5001]: I0128 17:28:42.592342 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xvsrw"] Jan 28 17:28:44 crc kubenswrapper[5001]: I0128 17:28:44.528316 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-v9mzz" event={"ID":"f75ca53f-11de-4a98-93dc-0f269011b505","Type":"ContainerStarted","Data":"d097f1255d22c6b1eb9c6b0dd6939bb708af1da85a346e2afd052e2daeedeaf2"} Jan 28 17:28:44 crc kubenswrapper[5001]: I0128 17:28:44.528848 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-v9mzz" Jan 28 17:28:44 crc kubenswrapper[5001]: I0128 17:28:44.535504 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" event={"ID":"7b9b8c1b-510b-4d07-9618-cc8b7b12e509","Type":"ContainerStarted","Data":"0f7ac94f6ff84e81b713eff949fb40d558293adb308d1f80fb6ecb174f8f2e67"} Jan 28 17:28:44 crc kubenswrapper[5001]: I0128 17:28:44.535676 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xvsrw" podUID="d733ba10-ef90-4ff7-b454-0dfce6dcd010" containerName="registry-server" containerID="cri-o://2148a2d1ecad9772df1741f7b61f93047bb328ef8ea536be2f70485f64657c1b" gracePeriod=2 Jan 28 17:28:44 crc kubenswrapper[5001]: I0128 17:28:44.536117 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:44 crc kubenswrapper[5001]: I0128 17:28:44.545176 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-v9mzz" podStartSLOduration=1.339664865 podStartE2EDuration="5.545161047s" podCreationTimestamp="2026-01-28 17:28:39 +0000 UTC" firstStartedPulling="2026-01-28 17:28:39.938670657 +0000 UTC m=+766.106458897" lastFinishedPulling="2026-01-28 17:28:44.144166839 +0000 UTC m=+770.311955079" observedRunningTime="2026-01-28 17:28:44.542648484 +0000 UTC m=+770.710436714" watchObservedRunningTime="2026-01-28 17:28:44.545161047 +0000 UTC m=+770.712949307" Jan 28 17:28:44 crc kubenswrapper[5001]: I0128 17:28:44.571614 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" podStartSLOduration=8.571594575 podStartE2EDuration="8.571594575s" podCreationTimestamp="2026-01-28 17:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:28:44.569772212 +0000 UTC m=+770.737560442" watchObservedRunningTime="2026-01-28 17:28:44.571594575 +0000 UTC m=+770.739382815" Jan 28 17:28:44 crc kubenswrapper[5001]: I0128 17:28:44.576526 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:45 crc kubenswrapper[5001]: I0128 17:28:45.064854 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-t9m4t"] Jan 28 17:28:45 crc kubenswrapper[5001]: I0128 17:28:45.064992 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-t9m4t" Jan 28 17:28:45 crc kubenswrapper[5001]: I0128 17:28:45.065437 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-t9m4t" Jan 28 17:28:45 crc kubenswrapper[5001]: I0128 17:28:45.078394 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl"] Jan 28 17:28:45 crc kubenswrapper[5001]: I0128 17:28:45.078522 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:45 crc kubenswrapper[5001]: I0128 17:28:45.078903 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:45 crc kubenswrapper[5001]: I0128 17:28:45.098018 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7589cd8bf7-f96zs"] Jan 28 17:28:45 crc kubenswrapper[5001]: I0128 17:28:45.098170 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:45 crc kubenswrapper[5001]: I0128 17:28:45.098657 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:45 crc kubenswrapper[5001]: I0128 17:28:45.101224 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6"] Jan 28 17:28:45 crc kubenswrapper[5001]: I0128 17:28:45.101346 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" Jan 28 17:28:45 crc kubenswrapper[5001]: I0128 17:28:45.101780 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" Jan 28 17:28:45 crc kubenswrapper[5001]: E0128 17:28:45.103023 5001 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-gwvwl_openshift-nmstate_6aa45dc9-1c42-4dbc-b421-cd87505ab222_0(c8b3d9e970340dd09025c9df1e3d6cceedbb06485b123b1b0107b4f63d9c7d32): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 28 17:28:45 crc kubenswrapper[5001]: E0128 17:28:45.103072 5001 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-gwvwl_openshift-nmstate_6aa45dc9-1c42-4dbc-b421-cd87505ab222_0(c8b3d9e970340dd09025c9df1e3d6cceedbb06485b123b1b0107b4f63d9c7d32): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:45 crc kubenswrapper[5001]: E0128 17:28:45.103099 5001 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-gwvwl_openshift-nmstate_6aa45dc9-1c42-4dbc-b421-cd87505ab222_0(c8b3d9e970340dd09025c9df1e3d6cceedbb06485b123b1b0107b4f63d9c7d32): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:45 crc kubenswrapper[5001]: E0128 17:28:45.103140 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-console-plugin-7754f76f8b-gwvwl_openshift-nmstate(6aa45dc9-1c42-4dbc-b421-cd87505ab222)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-console-plugin-7754f76f8b-gwvwl_openshift-nmstate(6aa45dc9-1c42-4dbc-b421-cd87505ab222)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-gwvwl_openshift-nmstate_6aa45dc9-1c42-4dbc-b421-cd87505ab222_0(c8b3d9e970340dd09025c9df1e3d6cceedbb06485b123b1b0107b4f63d9c7d32): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" podUID="6aa45dc9-1c42-4dbc-b421-cd87505ab222" Jan 28 17:28:45 crc kubenswrapper[5001]: E0128 17:28:45.112408 5001 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-t9m4t_openshift-nmstate_4defca7c-f2c4-428f-b722-a1e9895e42fe_0(da0a49ba6fccd27c5bebc5c2fe7d12ea1016a445ae8c9780f0540c871eeb9c34): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 17:28:45 crc kubenswrapper[5001]: E0128 17:28:45.112475 5001 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-t9m4t_openshift-nmstate_4defca7c-f2c4-428f-b722-a1e9895e42fe_0(da0a49ba6fccd27c5bebc5c2fe7d12ea1016a445ae8c9780f0540c871eeb9c34): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-metrics-54757c584b-t9m4t" Jan 28 17:28:45 crc kubenswrapper[5001]: E0128 17:28:45.112494 5001 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-t9m4t_openshift-nmstate_4defca7c-f2c4-428f-b722-a1e9895e42fe_0(da0a49ba6fccd27c5bebc5c2fe7d12ea1016a445ae8c9780f0540c871eeb9c34): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-metrics-54757c584b-t9m4t" Jan 28 17:28:45 crc kubenswrapper[5001]: E0128 17:28:45.112538 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-metrics-54757c584b-t9m4t_openshift-nmstate(4defca7c-f2c4-428f-b722-a1e9895e42fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-metrics-54757c584b-t9m4t_openshift-nmstate(4defca7c-f2c4-428f-b722-a1e9895e42fe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-t9m4t_openshift-nmstate_4defca7c-f2c4-428f-b722-a1e9895e42fe_0(da0a49ba6fccd27c5bebc5c2fe7d12ea1016a445ae8c9780f0540c871eeb9c34): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-metrics-54757c584b-t9m4t" podUID="4defca7c-f2c4-428f-b722-a1e9895e42fe" Jan 28 17:28:45 crc kubenswrapper[5001]: E0128 17:28:45.140551 5001 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-7589cd8bf7-f96zs_openshift-console_a92ae8b8-b047-4412-aa99-928155d7df88_0(9649528cedf41877404c2db2744b748a49e68f067c3a83e93c0806fa4e88db44): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 17:28:45 crc kubenswrapper[5001]: E0128 17:28:45.140619 5001 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-7589cd8bf7-f96zs_openshift-console_a92ae8b8-b047-4412-aa99-928155d7df88_0(9649528cedf41877404c2db2744b748a49e68f067c3a83e93c0806fa4e88db44): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:45 crc kubenswrapper[5001]: E0128 17:28:45.140644 5001 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-7589cd8bf7-f96zs_openshift-console_a92ae8b8-b047-4412-aa99-928155d7df88_0(9649528cedf41877404c2db2744b748a49e68f067c3a83e93c0806fa4e88db44): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:45 crc kubenswrapper[5001]: E0128 17:28:45.140687 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"console-7589cd8bf7-f96zs_openshift-console(a92ae8b8-b047-4412-aa99-928155d7df88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"console-7589cd8bf7-f96zs_openshift-console(a92ae8b8-b047-4412-aa99-928155d7df88)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-7589cd8bf7-f96zs_openshift-console_a92ae8b8-b047-4412-aa99-928155d7df88_0(9649528cedf41877404c2db2744b748a49e68f067c3a83e93c0806fa4e88db44): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-console/console-7589cd8bf7-f96zs" podUID="a92ae8b8-b047-4412-aa99-928155d7df88" Jan 28 17:28:45 crc kubenswrapper[5001]: E0128 17:28:45.145256 5001 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-fmrf6_openshift-nmstate_a29a6746-0c3a-4887-ac29-530b8771c1dc_0(fee4fffa954779998b6b59c52be67ca592c946a35924fe572738fe0b7e79cbe4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 17:28:45 crc kubenswrapper[5001]: E0128 17:28:45.145305 5001 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-fmrf6_openshift-nmstate_a29a6746-0c3a-4887-ac29-530b8771c1dc_0(fee4fffa954779998b6b59c52be67ca592c946a35924fe572738fe0b7e79cbe4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" Jan 28 17:28:45 crc kubenswrapper[5001]: E0128 17:28:45.145323 5001 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-fmrf6_openshift-nmstate_a29a6746-0c3a-4887-ac29-530b8771c1dc_0(fee4fffa954779998b6b59c52be67ca592c946a35924fe572738fe0b7e79cbe4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" Jan 28 17:28:45 crc kubenswrapper[5001]: E0128 17:28:45.145390 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-webhook-8474b5b9d8-fmrf6_openshift-nmstate(a29a6746-0c3a-4887-ac29-530b8771c1dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-webhook-8474b5b9d8-fmrf6_openshift-nmstate(a29a6746-0c3a-4887-ac29-530b8771c1dc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-fmrf6_openshift-nmstate_a29a6746-0c3a-4887-ac29-530b8771c1dc_0(fee4fffa954779998b6b59c52be67ca592c946a35924fe572738fe0b7e79cbe4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" podUID="a29a6746-0c3a-4887-ac29-530b8771c1dc" Jan 28 17:28:45 crc kubenswrapper[5001]: I0128 17:28:45.542277 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:45 crc kubenswrapper[5001]: I0128 17:28:45.542593 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:45 crc kubenswrapper[5001]: I0128 17:28:45.567911 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.200083 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xvsrw" Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.349093 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d733ba10-ef90-4ff7-b454-0dfce6dcd010-utilities\") pod \"d733ba10-ef90-4ff7-b454-0dfce6dcd010\" (UID: \"d733ba10-ef90-4ff7-b454-0dfce6dcd010\") " Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.349215 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d733ba10-ef90-4ff7-b454-0dfce6dcd010-catalog-content\") pod \"d733ba10-ef90-4ff7-b454-0dfce6dcd010\" (UID: \"d733ba10-ef90-4ff7-b454-0dfce6dcd010\") " Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.349253 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-958f5\" (UniqueName: \"kubernetes.io/projected/d733ba10-ef90-4ff7-b454-0dfce6dcd010-kube-api-access-958f5\") pod \"d733ba10-ef90-4ff7-b454-0dfce6dcd010\" (UID: \"d733ba10-ef90-4ff7-b454-0dfce6dcd010\") " Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.349818 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d733ba10-ef90-4ff7-b454-0dfce6dcd010-utilities" (OuterVolumeSpecName: "utilities") pod "d733ba10-ef90-4ff7-b454-0dfce6dcd010" (UID: "d733ba10-ef90-4ff7-b454-0dfce6dcd010"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.362237 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d733ba10-ef90-4ff7-b454-0dfce6dcd010-kube-api-access-958f5" (OuterVolumeSpecName: "kube-api-access-958f5") pod "d733ba10-ef90-4ff7-b454-0dfce6dcd010" (UID: "d733ba10-ef90-4ff7-b454-0dfce6dcd010"). InnerVolumeSpecName "kube-api-access-958f5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.450792 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-958f5\" (UniqueName: \"kubernetes.io/projected/d733ba10-ef90-4ff7-b454-0dfce6dcd010-kube-api-access-958f5\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.450849 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d733ba10-ef90-4ff7-b454-0dfce6dcd010-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.467544 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d733ba10-ef90-4ff7-b454-0dfce6dcd010-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d733ba10-ef90-4ff7-b454-0dfce6dcd010" (UID: "d733ba10-ef90-4ff7-b454-0dfce6dcd010"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.547739 5001 generic.go:334] "Generic (PLEG): container finished" podID="d733ba10-ef90-4ff7-b454-0dfce6dcd010" containerID="2148a2d1ecad9772df1741f7b61f93047bb328ef8ea536be2f70485f64657c1b" exitCode=0 Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.547811 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xvsrw" Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.547851 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvsrw" event={"ID":"d733ba10-ef90-4ff7-b454-0dfce6dcd010","Type":"ContainerDied","Data":"2148a2d1ecad9772df1741f7b61f93047bb328ef8ea536be2f70485f64657c1b"} Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.547904 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvsrw" event={"ID":"d733ba10-ef90-4ff7-b454-0dfce6dcd010","Type":"ContainerDied","Data":"b11e129c7de7769c9fb896584ceea86a460db6c27ca939443ea3fb9e238c81cb"} Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.547935 5001 scope.go:117] "RemoveContainer" containerID="2148a2d1ecad9772df1741f7b61f93047bb328ef8ea536be2f70485f64657c1b" Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.552143 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d733ba10-ef90-4ff7-b454-0dfce6dcd010-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.564853 5001 scope.go:117] "RemoveContainer" containerID="30d5db8927ae275f8feeaeac8ccaaf593e25ead56963fe615fe64710f99ad06f" Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.573739 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xvsrw"] Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.577499 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xvsrw"] Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.600686 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d733ba10-ef90-4ff7-b454-0dfce6dcd010" path="/var/lib/kubelet/pods/d733ba10-ef90-4ff7-b454-0dfce6dcd010/volumes" Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.600801 5001 scope.go:117] "RemoveContainer" containerID="7731461bb051dc26b7d4aef81f72ec7c6858bb54888281a72e646494eb1a9354" Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.623239 5001 scope.go:117] "RemoveContainer" containerID="2148a2d1ecad9772df1741f7b61f93047bb328ef8ea536be2f70485f64657c1b" Jan 28 17:28:46 crc kubenswrapper[5001]: E0128 17:28:46.623811 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2148a2d1ecad9772df1741f7b61f93047bb328ef8ea536be2f70485f64657c1b\": container with ID starting with 2148a2d1ecad9772df1741f7b61f93047bb328ef8ea536be2f70485f64657c1b not found: ID does not exist" containerID="2148a2d1ecad9772df1741f7b61f93047bb328ef8ea536be2f70485f64657c1b" Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.623857 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2148a2d1ecad9772df1741f7b61f93047bb328ef8ea536be2f70485f64657c1b"} err="failed to get container status \"2148a2d1ecad9772df1741f7b61f93047bb328ef8ea536be2f70485f64657c1b\": rpc error: code = NotFound desc = could not find container \"2148a2d1ecad9772df1741f7b61f93047bb328ef8ea536be2f70485f64657c1b\": container with ID starting with 2148a2d1ecad9772df1741f7b61f93047bb328ef8ea536be2f70485f64657c1b not found: ID does not exist" Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.623894 5001 scope.go:117] "RemoveContainer" containerID="30d5db8927ae275f8feeaeac8ccaaf593e25ead56963fe615fe64710f99ad06f" Jan 28 17:28:46 crc 
kubenswrapper[5001]: E0128 17:28:46.624468 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30d5db8927ae275f8feeaeac8ccaaf593e25ead56963fe615fe64710f99ad06f\": container with ID starting with 30d5db8927ae275f8feeaeac8ccaaf593e25ead56963fe615fe64710f99ad06f not found: ID does not exist" containerID="30d5db8927ae275f8feeaeac8ccaaf593e25ead56963fe615fe64710f99ad06f" Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.624499 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30d5db8927ae275f8feeaeac8ccaaf593e25ead56963fe615fe64710f99ad06f"} err="failed to get container status \"30d5db8927ae275f8feeaeac8ccaaf593e25ead56963fe615fe64710f99ad06f\": rpc error: code = NotFound desc = could not find container \"30d5db8927ae275f8feeaeac8ccaaf593e25ead56963fe615fe64710f99ad06f\": container with ID starting with 30d5db8927ae275f8feeaeac8ccaaf593e25ead56963fe615fe64710f99ad06f not found: ID does not exist" Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.624520 5001 scope.go:117] "RemoveContainer" containerID="7731461bb051dc26b7d4aef81f72ec7c6858bb54888281a72e646494eb1a9354" Jan 28 17:28:46 crc kubenswrapper[5001]: E0128 17:28:46.624792 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7731461bb051dc26b7d4aef81f72ec7c6858bb54888281a72e646494eb1a9354\": container with ID starting with 7731461bb051dc26b7d4aef81f72ec7c6858bb54888281a72e646494eb1a9354 not found: ID does not exist" containerID="7731461bb051dc26b7d4aef81f72ec7c6858bb54888281a72e646494eb1a9354" Jan 28 17:28:46 crc kubenswrapper[5001]: I0128 17:28:46.624824 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7731461bb051dc26b7d4aef81f72ec7c6858bb54888281a72e646494eb1a9354"} err="failed to get container status \"7731461bb051dc26b7d4aef81f72ec7c6858bb54888281a72e646494eb1a9354\": rpc error: code = NotFound desc = could not find container \"7731461bb051dc26b7d4aef81f72ec7c6858bb54888281a72e646494eb1a9354\": container with ID starting with 7731461bb051dc26b7d4aef81f72ec7c6858bb54888281a72e646494eb1a9354 not found: ID does not exist" Jan 28 17:28:49 crc kubenswrapper[5001]: I0128 17:28:49.936425 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-v9mzz" Jan 28 17:28:55 crc kubenswrapper[5001]: I0128 17:28:55.593276 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-t9m4t" Jan 28 17:28:55 crc kubenswrapper[5001]: I0128 17:28:55.594299 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-t9m4t" Jan 28 17:28:55 crc kubenswrapper[5001]: I0128 17:28:55.978959 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-t9m4t"] Jan 28 17:28:56 crc kubenswrapper[5001]: I0128 17:28:56.593208 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" Jan 28 17:28:56 crc kubenswrapper[5001]: I0128 17:28:56.593645 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:56 crc kubenswrapper[5001]: I0128 17:28:56.593780 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" Jan 28 17:28:56 crc kubenswrapper[5001]: I0128 17:28:56.594173 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:28:56 crc kubenswrapper[5001]: I0128 17:28:56.601892 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-t9m4t" event={"ID":"4defca7c-f2c4-428f-b722-a1e9895e42fe","Type":"ContainerStarted","Data":"f9454b3025df5890556aac3dde67bbbe708c3fc182ad67c9181c6261431ca487"} Jan 28 17:28:57 crc kubenswrapper[5001]: I0128 17:28:57.010554 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6"] Jan 28 17:28:57 crc kubenswrapper[5001]: I0128 17:28:57.014025 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7589cd8bf7-f96zs"] Jan 28 17:28:57 crc kubenswrapper[5001]: W0128 17:28:57.024885 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda29a6746_0c3a_4887_ac29_530b8771c1dc.slice/crio-b0882e771c8627511df9f5a5bda58ae43b7463957e125f6f2666b9e2b114fecf WatchSource:0}: Error finding container b0882e771c8627511df9f5a5bda58ae43b7463957e125f6f2666b9e2b114fecf: Status 404 returned error can't find the container with id b0882e771c8627511df9f5a5bda58ae43b7463957e125f6f2666b9e2b114fecf Jan 28 17:28:57 crc kubenswrapper[5001]: W0128 17:28:57.033546 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda92ae8b8_b047_4412_aa99_928155d7df88.slice/crio-1cacb47c7afb0cba6df7d56be99838318765a095001ffdbd0ee29261fe7df2ba WatchSource:0}: Error finding container 1cacb47c7afb0cba6df7d56be99838318765a095001ffdbd0ee29261fe7df2ba: Status 404 returned error can't find the container with id 1cacb47c7afb0cba6df7d56be99838318765a095001ffdbd0ee29261fe7df2ba Jan 28 17:28:57 crc kubenswrapper[5001]: I0128 17:28:57.609218 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" event={"ID":"a29a6746-0c3a-4887-ac29-530b8771c1dc","Type":"ContainerStarted","Data":"b0882e771c8627511df9f5a5bda58ae43b7463957e125f6f2666b9e2b114fecf"} Jan 28 17:28:57 crc kubenswrapper[5001]: I0128 17:28:57.611016 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7589cd8bf7-f96zs" event={"ID":"a92ae8b8-b047-4412-aa99-928155d7df88","Type":"ContainerStarted","Data":"dc63c6c546f5878fb86e3d18e7081d1ae678c55c71b160e64d15bfbaf954af6c"} Jan 28 17:28:57 crc kubenswrapper[5001]: I0128 17:28:57.611062 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7589cd8bf7-f96zs" event={"ID":"a92ae8b8-b047-4412-aa99-928155d7df88","Type":"ContainerStarted","Data":"1cacb47c7afb0cba6df7d56be99838318765a095001ffdbd0ee29261fe7df2ba"} Jan 28 17:28:57 crc kubenswrapper[5001]: I0128 17:28:57.613070 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-t9m4t" event={"ID":"4defca7c-f2c4-428f-b722-a1e9895e42fe","Type":"ContainerStarted","Data":"3b654f9b226d523cd17d210c63657afa9e5331b00b0a5eed4067f7996a78ce1d"} Jan 28 17:28:57 crc kubenswrapper[5001]: I0128 17:28:57.638856 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7589cd8bf7-f96zs" podStartSLOduration=18.638841346 
podStartE2EDuration="18.638841346s" podCreationTimestamp="2026-01-28 17:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:28:57.636643002 +0000 UTC m=+783.804431292" watchObservedRunningTime="2026-01-28 17:28:57.638841346 +0000 UTC m=+783.806629576" Jan 28 17:28:58 crc kubenswrapper[5001]: I0128 17:28:58.593988 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:58 crc kubenswrapper[5001]: I0128 17:28:58.594485 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" Jan 28 17:28:58 crc kubenswrapper[5001]: I0128 17:28:58.619479 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" event={"ID":"a29a6746-0c3a-4887-ac29-530b8771c1dc","Type":"ContainerStarted","Data":"98c8a6e6c4ccc2d47068a9d4f2c28b074a926a7c79134915de1f1e7004417611"} Jan 28 17:28:58 crc kubenswrapper[5001]: I0128 17:28:58.643949 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" podStartSLOduration=18.763557248 podStartE2EDuration="19.643926564s" podCreationTimestamp="2026-01-28 17:28:39 +0000 UTC" firstStartedPulling="2026-01-28 17:28:57.027118454 +0000 UTC m=+783.194906684" lastFinishedPulling="2026-01-28 17:28:57.90748773 +0000 UTC m=+784.075276000" observedRunningTime="2026-01-28 17:28:58.635238141 +0000 UTC m=+784.803026391" watchObservedRunningTime="2026-01-28 17:28:58.643926564 +0000 UTC m=+784.811714794" Jan 28 17:28:58 crc kubenswrapper[5001]: I0128 17:28:58.805573 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl"] Jan 28 17:28:59 crc kubenswrapper[5001]: W0128 17:28:59.099410 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6aa45dc9_1c42_4dbc_b421_cd87505ab222.slice/crio-806b24042c51b28ad1d89714c990f911af45eb5821ebef3799c6cd5f211b8dfd WatchSource:0}: Error finding container 806b24042c51b28ad1d89714c990f911af45eb5821ebef3799c6cd5f211b8dfd: Status 404 returned error can't find the container with id 806b24042c51b28ad1d89714c990f911af45eb5821ebef3799c6cd5f211b8dfd Jan 28 17:28:59 crc kubenswrapper[5001]: I0128 17:28:59.628359 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" event={"ID":"6aa45dc9-1c42-4dbc-b421-cd87505ab222","Type":"ContainerStarted","Data":"806b24042c51b28ad1d89714c990f911af45eb5821ebef3799c6cd5f211b8dfd"} Jan 28 17:28:59 crc kubenswrapper[5001]: I0128 17:28:59.630999 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-t9m4t" event={"ID":"4defca7c-f2c4-428f-b722-a1e9895e42fe","Type":"ContainerStarted","Data":"6f00d28615505277b996f239b4f336518c070291388b7b5c08e0be0bf8e4e363"} Jan 28 17:28:59 crc kubenswrapper[5001]: I0128 17:28:59.631141 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" Jan 28 17:28:59 crc kubenswrapper[5001]: I0128 17:28:59.650259 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-t9m4t" podStartSLOduration=17.472844512 
podStartE2EDuration="20.650233768s" podCreationTimestamp="2026-01-28 17:28:39 +0000 UTC" firstStartedPulling="2026-01-28 17:28:55.990599072 +0000 UTC m=+782.158387302" lastFinishedPulling="2026-01-28 17:28:59.167988328 +0000 UTC m=+785.335776558" observedRunningTime="2026-01-28 17:28:59.648174938 +0000 UTC m=+785.815963188" watchObservedRunningTime="2026-01-28 17:28:59.650233768 +0000 UTC m=+785.818021998" Jan 28 17:29:00 crc kubenswrapper[5001]: I0128 17:29:00.255652 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:29:00 crc kubenswrapper[5001]: I0128 17:29:00.256319 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:29:00 crc kubenswrapper[5001]: I0128 17:29:00.262795 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:29:00 crc kubenswrapper[5001]: I0128 17:29:00.639614 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7589cd8bf7-f96zs" Jan 28 17:29:00 crc kubenswrapper[5001]: I0128 17:29:00.688444 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4xbj9"] Jan 28 17:29:02 crc kubenswrapper[5001]: I0128 17:29:02.649500 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" event={"ID":"6aa45dc9-1c42-4dbc-b421-cd87505ab222","Type":"ContainerStarted","Data":"65edffa19f248a67198097bc13e4a840e850a8d0456371a13947d6228aaadfa1"} Jan 28 17:29:02 crc kubenswrapper[5001]: I0128 17:29:02.666307 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwvwl" podStartSLOduration=21.24445297 podStartE2EDuration="23.666282857s" podCreationTimestamp="2026-01-28 17:28:39 +0000 UTC" firstStartedPulling="2026-01-28 17:28:59.101424784 +0000 UTC m=+785.269213014" lastFinishedPulling="2026-01-28 17:29:01.523254671 +0000 UTC m=+787.691042901" observedRunningTime="2026-01-28 17:29:02.661421006 +0000 UTC m=+788.829209236" watchObservedRunningTime="2026-01-28 17:29:02.666282857 +0000 UTC m=+788.834071087" Jan 28 17:29:04 crc kubenswrapper[5001]: I0128 17:29:04.834631 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:29:04 crc kubenswrapper[5001]: I0128 17:29:04.834994 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:29:07 crc kubenswrapper[5001]: I0128 17:29:07.276165 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xp89n" Jan 28 17:29:10 crc kubenswrapper[5001]: I0128 17:29:10.498930 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fmrf6" Jan 28 17:29:23 crc kubenswrapper[5001]: I0128 17:29:23.296916 5001 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p"] Jan 28 17:29:23 crc kubenswrapper[5001]: E0128 17:29:23.297627 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d733ba10-ef90-4ff7-b454-0dfce6dcd010" containerName="extract-utilities" Jan 28 17:29:23 crc kubenswrapper[5001]: I0128 17:29:23.297641 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="d733ba10-ef90-4ff7-b454-0dfce6dcd010" containerName="extract-utilities" Jan 28 17:29:23 crc kubenswrapper[5001]: E0128 17:29:23.297655 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d733ba10-ef90-4ff7-b454-0dfce6dcd010" containerName="extract-content" Jan 28 17:29:23 crc kubenswrapper[5001]: I0128 17:29:23.297663 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="d733ba10-ef90-4ff7-b454-0dfce6dcd010" containerName="extract-content" Jan 28 17:29:23 crc kubenswrapper[5001]: E0128 17:29:23.297675 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d733ba10-ef90-4ff7-b454-0dfce6dcd010" containerName="registry-server" Jan 28 17:29:23 crc kubenswrapper[5001]: I0128 17:29:23.297685 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="d733ba10-ef90-4ff7-b454-0dfce6dcd010" containerName="registry-server" Jan 28 17:29:23 crc kubenswrapper[5001]: I0128 17:29:23.297795 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="d733ba10-ef90-4ff7-b454-0dfce6dcd010" containerName="registry-server" Jan 28 17:29:23 crc kubenswrapper[5001]: I0128 17:29:23.298519 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p" Jan 28 17:29:23 crc kubenswrapper[5001]: I0128 17:29:23.300124 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 17:29:23 crc kubenswrapper[5001]: I0128 17:29:23.307935 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p"] Jan 28 17:29:23 crc kubenswrapper[5001]: I0128 17:29:23.367328 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cda69916-8545-4983-b874-78620c94abbc-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p\" (UID: \"cda69916-8545-4983-b874-78620c94abbc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p" Jan 28 17:29:23 crc kubenswrapper[5001]: I0128 17:29:23.367411 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cda69916-8545-4983-b874-78620c94abbc-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p\" (UID: \"cda69916-8545-4983-b874-78620c94abbc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p" Jan 28 17:29:23 crc kubenswrapper[5001]: I0128 17:29:23.367455 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwsfk\" (UniqueName: \"kubernetes.io/projected/cda69916-8545-4983-b874-78620c94abbc-kube-api-access-cwsfk\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p\" (UID: \"cda69916-8545-4983-b874-78620c94abbc\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p" Jan 28 17:29:23 crc kubenswrapper[5001]: I0128 17:29:23.468281 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cda69916-8545-4983-b874-78620c94abbc-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p\" (UID: \"cda69916-8545-4983-b874-78620c94abbc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p" Jan 28 17:29:23 crc kubenswrapper[5001]: I0128 17:29:23.468334 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cda69916-8545-4983-b874-78620c94abbc-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p\" (UID: \"cda69916-8545-4983-b874-78620c94abbc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p" Jan 28 17:29:23 crc kubenswrapper[5001]: I0128 17:29:23.468362 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwsfk\" (UniqueName: \"kubernetes.io/projected/cda69916-8545-4983-b874-78620c94abbc-kube-api-access-cwsfk\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p\" (UID: \"cda69916-8545-4983-b874-78620c94abbc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p" Jan 28 17:29:23 crc kubenswrapper[5001]: I0128 17:29:23.468801 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cda69916-8545-4983-b874-78620c94abbc-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p\" (UID: \"cda69916-8545-4983-b874-78620c94abbc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p" Jan 28 17:29:23 crc kubenswrapper[5001]: I0128 17:29:23.468904 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cda69916-8545-4983-b874-78620c94abbc-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p\" (UID: \"cda69916-8545-4983-b874-78620c94abbc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p" Jan 28 17:29:23 crc kubenswrapper[5001]: I0128 17:29:23.488087 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwsfk\" (UniqueName: \"kubernetes.io/projected/cda69916-8545-4983-b874-78620c94abbc-kube-api-access-cwsfk\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p\" (UID: \"cda69916-8545-4983-b874-78620c94abbc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p" Jan 28 17:29:23 crc kubenswrapper[5001]: I0128 17:29:23.621458 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p" Jan 28 17:29:24 crc kubenswrapper[5001]: I0128 17:29:24.060591 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p"] Jan 28 17:29:24 crc kubenswrapper[5001]: I0128 17:29:24.796041 5001 generic.go:334] "Generic (PLEG): container finished" podID="cda69916-8545-4983-b874-78620c94abbc" containerID="cdd94af3627d7b7533c51bd1031266eba257ea54fe055928c1e0094b1ba7a7ef" exitCode=0 Jan 28 17:29:24 crc kubenswrapper[5001]: I0128 17:29:24.797442 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p" event={"ID":"cda69916-8545-4983-b874-78620c94abbc","Type":"ContainerDied","Data":"cdd94af3627d7b7533c51bd1031266eba257ea54fe055928c1e0094b1ba7a7ef"} Jan 28 17:29:24 crc kubenswrapper[5001]: I0128 17:29:24.797476 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p" event={"ID":"cda69916-8545-4983-b874-78620c94abbc","Type":"ContainerStarted","Data":"4e5095b86b756dfe14f057675840f428da722f9e4d56e3df95e77b5ec3e6817e"} Jan 28 17:29:25 crc kubenswrapper[5001]: I0128 17:29:25.729851 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-4xbj9" podUID="a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9" containerName="console" containerID="cri-o://835010b7bad246883effd64dc7af97b1f25791df294a81a5d96ed02802df7b1b" gracePeriod=15 Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.145084 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4xbj9_a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9/console/0.log" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.145421 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.303249 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-console-config\") pod \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.303301 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-console-serving-cert\") pod \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.303339 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-service-ca\") pod \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.303429 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjh6j\" (UniqueName: \"kubernetes.io/projected/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-kube-api-access-gjh6j\") pod \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.303471 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-oauth-serving-cert\") pod \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.303500 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-trusted-ca-bundle\") pod \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.303563 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-console-oauth-config\") pod \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\" (UID: \"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9\") " Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.304330 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-console-config" (OuterVolumeSpecName: "console-config") pod "a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9" (UID: "a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.304345 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9" (UID: "a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.304389 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9" (UID: "a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.304416 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-service-ca" (OuterVolumeSpecName: "service-ca") pod "a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9" (UID: "a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.309729 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-kube-api-access-gjh6j" (OuterVolumeSpecName: "kube-api-access-gjh6j") pod "a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9" (UID: "a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9"). InnerVolumeSpecName "kube-api-access-gjh6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.310177 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9" (UID: "a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.310559 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9" (UID: "a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.404460 5001 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.404500 5001 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.404515 5001 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.404527 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjh6j\" (UniqueName: \"kubernetes.io/projected/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-kube-api-access-gjh6j\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.404539 5001 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.404548 5001 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.404558 5001 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.809325 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p" event={"ID":"cda69916-8545-4983-b874-78620c94abbc","Type":"ContainerStarted","Data":"7784b56feb2282fd90c096b9a16e7c000c75db4223c95a965a23034579e33d92"} Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.812561 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4xbj9_a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9/console/0.log" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.812647 5001 generic.go:334] "Generic (PLEG): container finished" podID="a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9" containerID="835010b7bad246883effd64dc7af97b1f25791df294a81a5d96ed02802df7b1b" exitCode=2 Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.812755 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4xbj9" event={"ID":"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9","Type":"ContainerDied","Data":"835010b7bad246883effd64dc7af97b1f25791df294a81a5d96ed02802df7b1b"} Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.812818 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4xbj9" event={"ID":"a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9","Type":"ContainerDied","Data":"741e8d582f727cbdd5ad9c040945f442839c06fdcc256d9a39e02b5a16e19725"} Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.812836 5001 scope.go:117] "RemoveContainer" 
containerID="835010b7bad246883effd64dc7af97b1f25791df294a81a5d96ed02802df7b1b" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.813056 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4xbj9" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.850309 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4xbj9"] Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.853880 5001 scope.go:117] "RemoveContainer" containerID="835010b7bad246883effd64dc7af97b1f25791df294a81a5d96ed02802df7b1b" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.854563 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-4xbj9"] Jan 28 17:29:26 crc kubenswrapper[5001]: E0128 17:29:26.854813 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"835010b7bad246883effd64dc7af97b1f25791df294a81a5d96ed02802df7b1b\": container with ID starting with 835010b7bad246883effd64dc7af97b1f25791df294a81a5d96ed02802df7b1b not found: ID does not exist" containerID="835010b7bad246883effd64dc7af97b1f25791df294a81a5d96ed02802df7b1b" Jan 28 17:29:26 crc kubenswrapper[5001]: I0128 17:29:26.854882 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"835010b7bad246883effd64dc7af97b1f25791df294a81a5d96ed02802df7b1b"} err="failed to get container status \"835010b7bad246883effd64dc7af97b1f25791df294a81a5d96ed02802df7b1b\": rpc error: code = NotFound desc = could not find container \"835010b7bad246883effd64dc7af97b1f25791df294a81a5d96ed02802df7b1b\": container with ID starting with 835010b7bad246883effd64dc7af97b1f25791df294a81a5d96ed02802df7b1b not found: ID does not exist" Jan 28 17:29:27 crc kubenswrapper[5001]: I0128 17:29:27.823128 5001 generic.go:334] "Generic (PLEG): container finished" podID="cda69916-8545-4983-b874-78620c94abbc" containerID="7784b56feb2282fd90c096b9a16e7c000c75db4223c95a965a23034579e33d92" exitCode=0 Jan 28 17:29:27 crc kubenswrapper[5001]: I0128 17:29:27.823210 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p" event={"ID":"cda69916-8545-4983-b874-78620c94abbc","Type":"ContainerDied","Data":"7784b56feb2282fd90c096b9a16e7c000c75db4223c95a965a23034579e33d92"} Jan 28 17:29:28 crc kubenswrapper[5001]: I0128 17:29:28.605939 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9" path="/var/lib/kubelet/pods/a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9/volumes" Jan 28 17:29:28 crc kubenswrapper[5001]: I0128 17:29:28.837638 5001 generic.go:334] "Generic (PLEG): container finished" podID="cda69916-8545-4983-b874-78620c94abbc" containerID="536547ff546faa1666c2181cee21954927ae7bac34a4954527f27fd29f927f3a" exitCode=0 Jan 28 17:29:28 crc kubenswrapper[5001]: I0128 17:29:28.837707 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p" event={"ID":"cda69916-8545-4983-b874-78620c94abbc","Type":"ContainerDied","Data":"536547ff546faa1666c2181cee21954927ae7bac34a4954527f27fd29f927f3a"} Jan 28 17:29:30 crc kubenswrapper[5001]: I0128 17:29:30.081246 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p" Jan 28 17:29:30 crc kubenswrapper[5001]: I0128 17:29:30.257464 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cda69916-8545-4983-b874-78620c94abbc-bundle\") pod \"cda69916-8545-4983-b874-78620c94abbc\" (UID: \"cda69916-8545-4983-b874-78620c94abbc\") " Jan 28 17:29:30 crc kubenswrapper[5001]: I0128 17:29:30.257547 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwsfk\" (UniqueName: \"kubernetes.io/projected/cda69916-8545-4983-b874-78620c94abbc-kube-api-access-cwsfk\") pod \"cda69916-8545-4983-b874-78620c94abbc\" (UID: \"cda69916-8545-4983-b874-78620c94abbc\") " Jan 28 17:29:30 crc kubenswrapper[5001]: I0128 17:29:30.257564 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cda69916-8545-4983-b874-78620c94abbc-util\") pod \"cda69916-8545-4983-b874-78620c94abbc\" (UID: \"cda69916-8545-4983-b874-78620c94abbc\") " Jan 28 17:29:30 crc kubenswrapper[5001]: I0128 17:29:30.258530 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cda69916-8545-4983-b874-78620c94abbc-bundle" (OuterVolumeSpecName: "bundle") pod "cda69916-8545-4983-b874-78620c94abbc" (UID: "cda69916-8545-4983-b874-78620c94abbc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:29:30 crc kubenswrapper[5001]: I0128 17:29:30.264209 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cda69916-8545-4983-b874-78620c94abbc-kube-api-access-cwsfk" (OuterVolumeSpecName: "kube-api-access-cwsfk") pod "cda69916-8545-4983-b874-78620c94abbc" (UID: "cda69916-8545-4983-b874-78620c94abbc"). InnerVolumeSpecName "kube-api-access-cwsfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:29:30 crc kubenswrapper[5001]: I0128 17:29:30.267500 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cda69916-8545-4983-b874-78620c94abbc-util" (OuterVolumeSpecName: "util") pod "cda69916-8545-4983-b874-78620c94abbc" (UID: "cda69916-8545-4983-b874-78620c94abbc"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:29:30 crc kubenswrapper[5001]: I0128 17:29:30.358522 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwsfk\" (UniqueName: \"kubernetes.io/projected/cda69916-8545-4983-b874-78620c94abbc-kube-api-access-cwsfk\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:30 crc kubenswrapper[5001]: I0128 17:29:30.358565 5001 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cda69916-8545-4983-b874-78620c94abbc-util\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:30 crc kubenswrapper[5001]: I0128 17:29:30.358574 5001 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cda69916-8545-4983-b874-78620c94abbc-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:30 crc kubenswrapper[5001]: I0128 17:29:30.853697 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p" event={"ID":"cda69916-8545-4983-b874-78620c94abbc","Type":"ContainerDied","Data":"4e5095b86b756dfe14f057675840f428da722f9e4d56e3df95e77b5ec3e6817e"} Jan 28 17:29:30 crc kubenswrapper[5001]: I0128 17:29:30.853736 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e5095b86b756dfe14f057675840f428da722f9e4d56e3df95e77b5ec3e6817e" Jan 28 17:29:30 crc kubenswrapper[5001]: I0128 17:29:30.853808 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p" Jan 28 17:29:34 crc kubenswrapper[5001]: I0128 17:29:34.834441 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:29:34 crc kubenswrapper[5001]: I0128 17:29:34.834899 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.512657 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-847cf68876-29g2k"] Jan 28 17:29:38 crc kubenswrapper[5001]: E0128 17:29:38.513219 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda69916-8545-4983-b874-78620c94abbc" containerName="util" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.513232 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda69916-8545-4983-b874-78620c94abbc" containerName="util" Jan 28 17:29:38 crc kubenswrapper[5001]: E0128 17:29:38.513242 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda69916-8545-4983-b874-78620c94abbc" containerName="extract" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.513251 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda69916-8545-4983-b874-78620c94abbc" containerName="extract" Jan 28 17:29:38 crc kubenswrapper[5001]: E0128 17:29:38.513263 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda69916-8545-4983-b874-78620c94abbc" containerName="pull" Jan 28 17:29:38 
crc kubenswrapper[5001]: I0128 17:29:38.513270 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda69916-8545-4983-b874-78620c94abbc" containerName="pull" Jan 28 17:29:38 crc kubenswrapper[5001]: E0128 17:29:38.513288 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9" containerName="console" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.513295 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9" containerName="console" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.513411 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8fdac41-2a21-4780-98bc-8b9f6ebd0cf9" containerName="console" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.513426 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda69916-8545-4983-b874-78620c94abbc" containerName="extract" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.513806 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-847cf68876-29g2k" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.515710 5001 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-kqsq8" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.515905 5001 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.515948 5001 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.516542 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.517266 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.543391 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-847cf68876-29g2k"] Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.656841 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzx66\" (UniqueName: \"kubernetes.io/projected/92f622d5-900f-4c24-b7d2-dfea8ccae720-kube-api-access-lzx66\") pod \"metallb-operator-controller-manager-847cf68876-29g2k\" (UID: \"92f622d5-900f-4c24-b7d2-dfea8ccae720\") " pod="metallb-system/metallb-operator-controller-manager-847cf68876-29g2k" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.657204 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92f622d5-900f-4c24-b7d2-dfea8ccae720-apiservice-cert\") pod \"metallb-operator-controller-manager-847cf68876-29g2k\" (UID: \"92f622d5-900f-4c24-b7d2-dfea8ccae720\") " pod="metallb-system/metallb-operator-controller-manager-847cf68876-29g2k" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.657317 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92f622d5-900f-4c24-b7d2-dfea8ccae720-webhook-cert\") pod \"metallb-operator-controller-manager-847cf68876-29g2k\" (UID: 
\"92f622d5-900f-4c24-b7d2-dfea8ccae720\") " pod="metallb-system/metallb-operator-controller-manager-847cf68876-29g2k" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.758823 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92f622d5-900f-4c24-b7d2-dfea8ccae720-apiservice-cert\") pod \"metallb-operator-controller-manager-847cf68876-29g2k\" (UID: \"92f622d5-900f-4c24-b7d2-dfea8ccae720\") " pod="metallb-system/metallb-operator-controller-manager-847cf68876-29g2k" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.758871 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92f622d5-900f-4c24-b7d2-dfea8ccae720-webhook-cert\") pod \"metallb-operator-controller-manager-847cf68876-29g2k\" (UID: \"92f622d5-900f-4c24-b7d2-dfea8ccae720\") " pod="metallb-system/metallb-operator-controller-manager-847cf68876-29g2k" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.758951 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzx66\" (UniqueName: \"kubernetes.io/projected/92f622d5-900f-4c24-b7d2-dfea8ccae720-kube-api-access-lzx66\") pod \"metallb-operator-controller-manager-847cf68876-29g2k\" (UID: \"92f622d5-900f-4c24-b7d2-dfea8ccae720\") " pod="metallb-system/metallb-operator-controller-manager-847cf68876-29g2k" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.766958 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92f622d5-900f-4c24-b7d2-dfea8ccae720-apiservice-cert\") pod \"metallb-operator-controller-manager-847cf68876-29g2k\" (UID: \"92f622d5-900f-4c24-b7d2-dfea8ccae720\") " pod="metallb-system/metallb-operator-controller-manager-847cf68876-29g2k" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.767716 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92f622d5-900f-4c24-b7d2-dfea8ccae720-webhook-cert\") pod \"metallb-operator-controller-manager-847cf68876-29g2k\" (UID: \"92f622d5-900f-4c24-b7d2-dfea8ccae720\") " pod="metallb-system/metallb-operator-controller-manager-847cf68876-29g2k" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.767951 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-86cf947cc7-rx2x6"] Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.768584 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-86cf947cc7-rx2x6" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.770921 5001 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.771068 5001 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.771397 5001 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-csfbg" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.784693 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-86cf947cc7-rx2x6"] Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.795310 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzx66\" (UniqueName: \"kubernetes.io/projected/92f622d5-900f-4c24-b7d2-dfea8ccae720-kube-api-access-lzx66\") pod \"metallb-operator-controller-manager-847cf68876-29g2k\" (UID: \"92f622d5-900f-4c24-b7d2-dfea8ccae720\") " pod="metallb-system/metallb-operator-controller-manager-847cf68876-29g2k" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.834662 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-847cf68876-29g2k" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.860818 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19c14ea1-ac12-4a0c-a10c-e65476d4aa41-webhook-cert\") pod \"metallb-operator-webhook-server-86cf947cc7-rx2x6\" (UID: \"19c14ea1-ac12-4a0c-a10c-e65476d4aa41\") " pod="metallb-system/metallb-operator-webhook-server-86cf947cc7-rx2x6" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.860869 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19c14ea1-ac12-4a0c-a10c-e65476d4aa41-apiservice-cert\") pod \"metallb-operator-webhook-server-86cf947cc7-rx2x6\" (UID: \"19c14ea1-ac12-4a0c-a10c-e65476d4aa41\") " pod="metallb-system/metallb-operator-webhook-server-86cf947cc7-rx2x6" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.860930 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvmfw\" (UniqueName: \"kubernetes.io/projected/19c14ea1-ac12-4a0c-a10c-e65476d4aa41-kube-api-access-nvmfw\") pod \"metallb-operator-webhook-server-86cf947cc7-rx2x6\" (UID: \"19c14ea1-ac12-4a0c-a10c-e65476d4aa41\") " pod="metallb-system/metallb-operator-webhook-server-86cf947cc7-rx2x6" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.962652 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19c14ea1-ac12-4a0c-a10c-e65476d4aa41-webhook-cert\") pod \"metallb-operator-webhook-server-86cf947cc7-rx2x6\" (UID: \"19c14ea1-ac12-4a0c-a10c-e65476d4aa41\") " pod="metallb-system/metallb-operator-webhook-server-86cf947cc7-rx2x6" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.963243 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19c14ea1-ac12-4a0c-a10c-e65476d4aa41-apiservice-cert\") pod 
\"metallb-operator-webhook-server-86cf947cc7-rx2x6\" (UID: \"19c14ea1-ac12-4a0c-a10c-e65476d4aa41\") " pod="metallb-system/metallb-operator-webhook-server-86cf947cc7-rx2x6" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.963423 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvmfw\" (UniqueName: \"kubernetes.io/projected/19c14ea1-ac12-4a0c-a10c-e65476d4aa41-kube-api-access-nvmfw\") pod \"metallb-operator-webhook-server-86cf947cc7-rx2x6\" (UID: \"19c14ea1-ac12-4a0c-a10c-e65476d4aa41\") " pod="metallb-system/metallb-operator-webhook-server-86cf947cc7-rx2x6" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.972254 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19c14ea1-ac12-4a0c-a10c-e65476d4aa41-webhook-cert\") pod \"metallb-operator-webhook-server-86cf947cc7-rx2x6\" (UID: \"19c14ea1-ac12-4a0c-a10c-e65476d4aa41\") " pod="metallb-system/metallb-operator-webhook-server-86cf947cc7-rx2x6" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.991484 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvmfw\" (UniqueName: \"kubernetes.io/projected/19c14ea1-ac12-4a0c-a10c-e65476d4aa41-kube-api-access-nvmfw\") pod \"metallb-operator-webhook-server-86cf947cc7-rx2x6\" (UID: \"19c14ea1-ac12-4a0c-a10c-e65476d4aa41\") " pod="metallb-system/metallb-operator-webhook-server-86cf947cc7-rx2x6" Jan 28 17:29:38 crc kubenswrapper[5001]: I0128 17:29:38.992398 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19c14ea1-ac12-4a0c-a10c-e65476d4aa41-apiservice-cert\") pod \"metallb-operator-webhook-server-86cf947cc7-rx2x6\" (UID: \"19c14ea1-ac12-4a0c-a10c-e65476d4aa41\") " pod="metallb-system/metallb-operator-webhook-server-86cf947cc7-rx2x6" Jan 28 17:29:39 crc kubenswrapper[5001]: I0128 17:29:39.055581 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-847cf68876-29g2k"] Jan 28 17:29:39 crc kubenswrapper[5001]: I0128 17:29:39.142406 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-86cf947cc7-rx2x6" Jan 28 17:29:39 crc kubenswrapper[5001]: I0128 17:29:39.490010 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-86cf947cc7-rx2x6"] Jan 28 17:29:39 crc kubenswrapper[5001]: W0128 17:29:39.505941 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19c14ea1_ac12_4a0c_a10c_e65476d4aa41.slice/crio-1e6801d864a64e426bc8390938158351c5332177865331c31872dae8d7ecc5cf WatchSource:0}: Error finding container 1e6801d864a64e426bc8390938158351c5332177865331c31872dae8d7ecc5cf: Status 404 returned error can't find the container with id 1e6801d864a64e426bc8390938158351c5332177865331c31872dae8d7ecc5cf Jan 28 17:29:39 crc kubenswrapper[5001]: I0128 17:29:39.902226 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-847cf68876-29g2k" event={"ID":"92f622d5-900f-4c24-b7d2-dfea8ccae720","Type":"ContainerStarted","Data":"10b231efde27552cd0261092317f411ea0e3e223de4d90dae342c51fcea1a76c"} Jan 28 17:29:39 crc kubenswrapper[5001]: I0128 17:29:39.903386 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86cf947cc7-rx2x6" event={"ID":"19c14ea1-ac12-4a0c-a10c-e65476d4aa41","Type":"ContainerStarted","Data":"1e6801d864a64e426bc8390938158351c5332177865331c31872dae8d7ecc5cf"} Jan 28 17:29:44 crc kubenswrapper[5001]: I0128 17:29:44.942606 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-847cf68876-29g2k" event={"ID":"92f622d5-900f-4c24-b7d2-dfea8ccae720","Type":"ContainerStarted","Data":"1d3da377821375818641caa11356b329da6a29e666e9a76bb2002c6e2e3b5dc3"} Jan 28 17:29:44 crc kubenswrapper[5001]: I0128 17:29:44.943187 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-847cf68876-29g2k" Jan 28 17:29:44 crc kubenswrapper[5001]: I0128 17:29:44.944344 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86cf947cc7-rx2x6" event={"ID":"19c14ea1-ac12-4a0c-a10c-e65476d4aa41","Type":"ContainerStarted","Data":"eb7e06ebf7f4bdd41d59945a61cd0e9339e9b70db90eaf5a00327a2e61b97042"} Jan 28 17:29:44 crc kubenswrapper[5001]: I0128 17:29:44.944469 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-86cf947cc7-rx2x6" Jan 28 17:29:44 crc kubenswrapper[5001]: I0128 17:29:44.990546 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-86cf947cc7-rx2x6" podStartSLOduration=2.301045522 podStartE2EDuration="6.990528776s" podCreationTimestamp="2026-01-28 17:29:38 +0000 UTC" firstStartedPulling="2026-01-28 17:29:39.50887416 +0000 UTC m=+825.676662390" lastFinishedPulling="2026-01-28 17:29:44.198357414 +0000 UTC m=+830.366145644" observedRunningTime="2026-01-28 17:29:44.986916631 +0000 UTC m=+831.154704861" watchObservedRunningTime="2026-01-28 17:29:44.990528776 +0000 UTC m=+831.158317006" Jan 28 17:29:44 crc kubenswrapper[5001]: I0128 17:29:44.995928 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-847cf68876-29g2k" podStartSLOduration=1.934206424 podStartE2EDuration="6.995913562s" 
podCreationTimestamp="2026-01-28 17:29:38 +0000 UTC" firstStartedPulling="2026-01-28 17:29:39.067659772 +0000 UTC m=+825.235448002" lastFinishedPulling="2026-01-28 17:29:44.12936691 +0000 UTC m=+830.297155140" observedRunningTime="2026-01-28 17:29:44.968072293 +0000 UTC m=+831.135860543" watchObservedRunningTime="2026-01-28 17:29:44.995913562 +0000 UTC m=+831.163701792" Jan 28 17:29:59 crc kubenswrapper[5001]: I0128 17:29:59.146307 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-86cf947cc7-rx2x6" Jan 28 17:30:00 crc kubenswrapper[5001]: I0128 17:30:00.146608 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46"] Jan 28 17:30:00 crc kubenswrapper[5001]: I0128 17:30:00.147271 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46" Jan 28 17:30:00 crc kubenswrapper[5001]: I0128 17:30:00.148997 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 17:30:00 crc kubenswrapper[5001]: I0128 17:30:00.149362 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 17:30:00 crc kubenswrapper[5001]: I0128 17:30:00.155300 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46"] Jan 28 17:30:00 crc kubenswrapper[5001]: I0128 17:30:00.259465 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc26b\" (UniqueName: \"kubernetes.io/projected/fd2a5ae1-f959-4db2-8041-53b2280c8daf-kube-api-access-vc26b\") pod \"collect-profiles-29493690-4sl46\" (UID: \"fd2a5ae1-f959-4db2-8041-53b2280c8daf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46" Jan 28 17:30:00 crc kubenswrapper[5001]: I0128 17:30:00.259510 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd2a5ae1-f959-4db2-8041-53b2280c8daf-secret-volume\") pod \"collect-profiles-29493690-4sl46\" (UID: \"fd2a5ae1-f959-4db2-8041-53b2280c8daf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46" Jan 28 17:30:00 crc kubenswrapper[5001]: I0128 17:30:00.259657 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd2a5ae1-f959-4db2-8041-53b2280c8daf-config-volume\") pod \"collect-profiles-29493690-4sl46\" (UID: \"fd2a5ae1-f959-4db2-8041-53b2280c8daf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46" Jan 28 17:30:00 crc kubenswrapper[5001]: I0128 17:30:00.360724 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd2a5ae1-f959-4db2-8041-53b2280c8daf-config-volume\") pod \"collect-profiles-29493690-4sl46\" (UID: \"fd2a5ae1-f959-4db2-8041-53b2280c8daf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46" Jan 28 17:30:00 crc kubenswrapper[5001]: I0128 17:30:00.361133 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc26b\" (UniqueName: 
\"kubernetes.io/projected/fd2a5ae1-f959-4db2-8041-53b2280c8daf-kube-api-access-vc26b\") pod \"collect-profiles-29493690-4sl46\" (UID: \"fd2a5ae1-f959-4db2-8041-53b2280c8daf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46" Jan 28 17:30:00 crc kubenswrapper[5001]: I0128 17:30:00.361257 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd2a5ae1-f959-4db2-8041-53b2280c8daf-secret-volume\") pod \"collect-profiles-29493690-4sl46\" (UID: \"fd2a5ae1-f959-4db2-8041-53b2280c8daf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46" Jan 28 17:30:00 crc kubenswrapper[5001]: I0128 17:30:00.361775 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd2a5ae1-f959-4db2-8041-53b2280c8daf-config-volume\") pod \"collect-profiles-29493690-4sl46\" (UID: \"fd2a5ae1-f959-4db2-8041-53b2280c8daf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46" Jan 28 17:30:00 crc kubenswrapper[5001]: I0128 17:30:00.367184 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd2a5ae1-f959-4db2-8041-53b2280c8daf-secret-volume\") pod \"collect-profiles-29493690-4sl46\" (UID: \"fd2a5ae1-f959-4db2-8041-53b2280c8daf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46" Jan 28 17:30:00 crc kubenswrapper[5001]: I0128 17:30:00.378521 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc26b\" (UniqueName: \"kubernetes.io/projected/fd2a5ae1-f959-4db2-8041-53b2280c8daf-kube-api-access-vc26b\") pod \"collect-profiles-29493690-4sl46\" (UID: \"fd2a5ae1-f959-4db2-8041-53b2280c8daf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46" Jan 28 17:30:00 crc kubenswrapper[5001]: I0128 17:30:00.478256 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46" Jan 28 17:30:00 crc kubenswrapper[5001]: I0128 17:30:00.886261 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46"] Jan 28 17:30:01 crc kubenswrapper[5001]: I0128 17:30:01.034306 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46" event={"ID":"fd2a5ae1-f959-4db2-8041-53b2280c8daf","Type":"ContainerStarted","Data":"453f124e08b54f8f16c452b1684c0b9a238a2f936269b0f3b70df0b386e2339b"} Jan 28 17:30:01 crc kubenswrapper[5001]: I0128 17:30:01.034362 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46" event={"ID":"fd2a5ae1-f959-4db2-8041-53b2280c8daf","Type":"ContainerStarted","Data":"5b8629df3586e49d177ad63e76a21bea89d1d900d15cbe2ee2b8201a5d79cf82"} Jan 28 17:30:01 crc kubenswrapper[5001]: I0128 17:30:01.049312 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46" podStartSLOduration=1.049296987 podStartE2EDuration="1.049296987s" podCreationTimestamp="2026-01-28 17:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:30:01.047733711 +0000 UTC m=+847.215521941" watchObservedRunningTime="2026-01-28 17:30:01.049296987 +0000 UTC m=+847.217085217" Jan 28 17:30:02 crc kubenswrapper[5001]: I0128 17:30:02.040123 5001 generic.go:334] "Generic (PLEG): container finished" podID="fd2a5ae1-f959-4db2-8041-53b2280c8daf" containerID="453f124e08b54f8f16c452b1684c0b9a238a2f936269b0f3b70df0b386e2339b" exitCode=0 Jan 28 17:30:02 crc kubenswrapper[5001]: I0128 17:30:02.040168 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46" event={"ID":"fd2a5ae1-f959-4db2-8041-53b2280c8daf","Type":"ContainerDied","Data":"453f124e08b54f8f16c452b1684c0b9a238a2f936269b0f3b70df0b386e2339b"} Jan 28 17:30:03 crc kubenswrapper[5001]: I0128 17:30:03.273946 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46" Jan 28 17:30:03 crc kubenswrapper[5001]: I0128 17:30:03.431273 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc26b\" (UniqueName: \"kubernetes.io/projected/fd2a5ae1-f959-4db2-8041-53b2280c8daf-kube-api-access-vc26b\") pod \"fd2a5ae1-f959-4db2-8041-53b2280c8daf\" (UID: \"fd2a5ae1-f959-4db2-8041-53b2280c8daf\") " Jan 28 17:30:03 crc kubenswrapper[5001]: I0128 17:30:03.431389 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd2a5ae1-f959-4db2-8041-53b2280c8daf-secret-volume\") pod \"fd2a5ae1-f959-4db2-8041-53b2280c8daf\" (UID: \"fd2a5ae1-f959-4db2-8041-53b2280c8daf\") " Jan 28 17:30:03 crc kubenswrapper[5001]: I0128 17:30:03.431452 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd2a5ae1-f959-4db2-8041-53b2280c8daf-config-volume\") pod \"fd2a5ae1-f959-4db2-8041-53b2280c8daf\" (UID: \"fd2a5ae1-f959-4db2-8041-53b2280c8daf\") " Jan 28 17:30:03 crc kubenswrapper[5001]: I0128 17:30:03.432107 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd2a5ae1-f959-4db2-8041-53b2280c8daf-config-volume" (OuterVolumeSpecName: "config-volume") pod "fd2a5ae1-f959-4db2-8041-53b2280c8daf" (UID: "fd2a5ae1-f959-4db2-8041-53b2280c8daf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:30:03 crc kubenswrapper[5001]: I0128 17:30:03.435802 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd2a5ae1-f959-4db2-8041-53b2280c8daf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fd2a5ae1-f959-4db2-8041-53b2280c8daf" (UID: "fd2a5ae1-f959-4db2-8041-53b2280c8daf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:30:03 crc kubenswrapper[5001]: I0128 17:30:03.436242 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd2a5ae1-f959-4db2-8041-53b2280c8daf-kube-api-access-vc26b" (OuterVolumeSpecName: "kube-api-access-vc26b") pod "fd2a5ae1-f959-4db2-8041-53b2280c8daf" (UID: "fd2a5ae1-f959-4db2-8041-53b2280c8daf"). InnerVolumeSpecName "kube-api-access-vc26b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:30:03 crc kubenswrapper[5001]: I0128 17:30:03.532413 5001 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd2a5ae1-f959-4db2-8041-53b2280c8daf-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:03 crc kubenswrapper[5001]: I0128 17:30:03.532450 5001 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd2a5ae1-f959-4db2-8041-53b2280c8daf-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:03 crc kubenswrapper[5001]: I0128 17:30:03.532459 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc26b\" (UniqueName: \"kubernetes.io/projected/fd2a5ae1-f959-4db2-8041-53b2280c8daf-kube-api-access-vc26b\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:04 crc kubenswrapper[5001]: I0128 17:30:04.050889 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46" event={"ID":"fd2a5ae1-f959-4db2-8041-53b2280c8daf","Type":"ContainerDied","Data":"5b8629df3586e49d177ad63e76a21bea89d1d900d15cbe2ee2b8201a5d79cf82"} Jan 28 17:30:04 crc kubenswrapper[5001]: I0128 17:30:04.051211 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b8629df3586e49d177ad63e76a21bea89d1d900d15cbe2ee2b8201a5d79cf82" Jan 28 17:30:04 crc kubenswrapper[5001]: I0128 17:30:04.050955 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-4sl46" Jan 28 17:30:04 crc kubenswrapper[5001]: I0128 17:30:04.834099 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:30:04 crc kubenswrapper[5001]: I0128 17:30:04.834716 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:30:04 crc kubenswrapper[5001]: I0128 17:30:04.834775 5001 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:30:04 crc kubenswrapper[5001]: I0128 17:30:04.835355 5001 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bcd1bb3eeb7df10e3aeb349c79594cb4ff827106a1c1b316d9b57fdb098e8eef"} pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:30:04 crc kubenswrapper[5001]: I0128 17:30:04.835414 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" containerID="cri-o://bcd1bb3eeb7df10e3aeb349c79594cb4ff827106a1c1b316d9b57fdb098e8eef" gracePeriod=600 Jan 28 17:30:05 crc kubenswrapper[5001]: I0128 17:30:05.060302 5001 generic.go:334] "Generic (PLEG): container finished" 
podID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerID="bcd1bb3eeb7df10e3aeb349c79594cb4ff827106a1c1b316d9b57fdb098e8eef" exitCode=0 Jan 28 17:30:05 crc kubenswrapper[5001]: I0128 17:30:05.060364 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerDied","Data":"bcd1bb3eeb7df10e3aeb349c79594cb4ff827106a1c1b316d9b57fdb098e8eef"} Jan 28 17:30:05 crc kubenswrapper[5001]: I0128 17:30:05.060728 5001 scope.go:117] "RemoveContainer" containerID="ae61fb070fc3dd351681c14090d43b7fd5a4e929a9d267dc18a26f6bd1033912" Jan 28 17:30:06 crc kubenswrapper[5001]: I0128 17:30:06.072736 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerStarted","Data":"64823f4f0cdb758673ce03dbcf6563ab253f05451ac978f41bb7afc046d56ae8"} Jan 28 17:30:18 crc kubenswrapper[5001]: I0128 17:30:18.838303 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-847cf68876-29g2k" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.550636 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9mqw4"] Jan 28 17:30:19 crc kubenswrapper[5001]: E0128 17:30:19.550954 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd2a5ae1-f959-4db2-8041-53b2280c8daf" containerName="collect-profiles" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.550995 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd2a5ae1-f959-4db2-8041-53b2280c8daf" containerName="collect-profiles" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.551130 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd2a5ae1-f959-4db2-8041-53b2280c8daf" containerName="collect-profiles" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.553023 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.555134 5001 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-md6zv" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.555567 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.561300 5001 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.562640 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-92lkl"] Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.563460 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-92lkl" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.568371 5001 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.580546 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-92lkl"] Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.644915 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-metrics-certs\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.644956 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-frr-sockets\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.645010 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-reloader\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.645047 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2899z\" (UniqueName: \"kubernetes.io/projected/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-kube-api-access-2899z\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.645187 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c196e172-3c33-4317-82c5-2dfbb916f6c4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-92lkl\" (UID: \"c196e172-3c33-4317-82c5-2dfbb916f6c4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-92lkl" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.645317 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-metrics\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.645386 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6chl4\" (UniqueName: \"kubernetes.io/projected/c196e172-3c33-4317-82c5-2dfbb916f6c4-kube-api-access-6chl4\") pod \"frr-k8s-webhook-server-7df86c4f6c-92lkl\" (UID: \"c196e172-3c33-4317-82c5-2dfbb916f6c4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-92lkl" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.645420 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-frr-startup\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " 
pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.645440 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-frr-conf\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.672402 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-fmxh8"] Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.673449 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fmxh8" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.678040 5001 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.678244 5001 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-55zq7" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.678439 5001 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.684533 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.688489 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-422s5"] Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.689350 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-422s5" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.691173 5001 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.700650 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-422s5"] Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.746429 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-metrics\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.746480 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6chl4\" (UniqueName: \"kubernetes.io/projected/c196e172-3c33-4317-82c5-2dfbb916f6c4-kube-api-access-6chl4\") pod \"frr-k8s-webhook-server-7df86c4f6c-92lkl\" (UID: \"c196e172-3c33-4317-82c5-2dfbb916f6c4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-92lkl" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.746500 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-frr-startup\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.746515 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-frr-conf\") pod \"frr-k8s-9mqw4\" (UID: 
\"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.746536 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4c2138ae-b9a4-4c2b-8049-ee00845be4d7-metallb-excludel2\") pod \"speaker-fmxh8\" (UID: \"4c2138ae-b9a4-4c2b-8049-ee00845be4d7\") " pod="metallb-system/speaker-fmxh8" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.746564 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4c2138ae-b9a4-4c2b-8049-ee00845be4d7-memberlist\") pod \"speaker-fmxh8\" (UID: \"4c2138ae-b9a4-4c2b-8049-ee00845be4d7\") " pod="metallb-system/speaker-fmxh8" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.746581 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtflm\" (UniqueName: \"kubernetes.io/projected/c93402b1-0843-4fac-980f-172929e0cb5e-kube-api-access-gtflm\") pod \"controller-6968d8fdc4-422s5\" (UID: \"c93402b1-0843-4fac-980f-172929e0cb5e\") " pod="metallb-system/controller-6968d8fdc4-422s5" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.746612 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-metrics-certs\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.746628 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-frr-sockets\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.746643 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c93402b1-0843-4fac-980f-172929e0cb5e-cert\") pod \"controller-6968d8fdc4-422s5\" (UID: \"c93402b1-0843-4fac-980f-172929e0cb5e\") " pod="metallb-system/controller-6968d8fdc4-422s5" Jan 28 17:30:19 crc kubenswrapper[5001]: E0128 17:30:19.746717 5001 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 28 17:30:19 crc kubenswrapper[5001]: E0128 17:30:19.746766 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-metrics-certs podName:d4307f53-3a2c-4fb5-8c0b-395eaf4582bb nodeName:}" failed. No retries permitted until 2026-01-28 17:30:20.246751477 +0000 UTC m=+866.414539707 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-metrics-certs") pod "frr-k8s-9mqw4" (UID: "d4307f53-3a2c-4fb5-8c0b-395eaf4582bb") : secret "frr-k8s-certs-secret" not found Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.746800 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-reloader\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.746827 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcq9v\" (UniqueName: \"kubernetes.io/projected/4c2138ae-b9a4-4c2b-8049-ee00845be4d7-kube-api-access-bcq9v\") pod \"speaker-fmxh8\" (UID: \"4c2138ae-b9a4-4c2b-8049-ee00845be4d7\") " pod="metallb-system/speaker-fmxh8" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.746853 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2899z\" (UniqueName: \"kubernetes.io/projected/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-kube-api-access-2899z\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.746867 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c2138ae-b9a4-4c2b-8049-ee00845be4d7-metrics-certs\") pod \"speaker-fmxh8\" (UID: \"4c2138ae-b9a4-4c2b-8049-ee00845be4d7\") " pod="metallb-system/speaker-fmxh8" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.746889 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c93402b1-0843-4fac-980f-172929e0cb5e-metrics-certs\") pod \"controller-6968d8fdc4-422s5\" (UID: \"c93402b1-0843-4fac-980f-172929e0cb5e\") " pod="metallb-system/controller-6968d8fdc4-422s5" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.746907 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c196e172-3c33-4317-82c5-2dfbb916f6c4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-92lkl\" (UID: \"c196e172-3c33-4317-82c5-2dfbb916f6c4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-92lkl" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.746928 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-metrics\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: E0128 17:30:19.746945 5001 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 28 17:30:19 crc kubenswrapper[5001]: E0128 17:30:19.747017 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c196e172-3c33-4317-82c5-2dfbb916f6c4-cert podName:c196e172-3c33-4317-82c5-2dfbb916f6c4 nodeName:}" failed. No retries permitted until 2026-01-28 17:30:20.247007124 +0000 UTC m=+866.414795354 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c196e172-3c33-4317-82c5-2dfbb916f6c4-cert") pod "frr-k8s-webhook-server-7df86c4f6c-92lkl" (UID: "c196e172-3c33-4317-82c5-2dfbb916f6c4") : secret "frr-k8s-webhook-server-cert" not found Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.747046 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-frr-conf\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.747542 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-reloader\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.747577 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-frr-startup\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.747645 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-frr-sockets\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.766921 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2899z\" (UniqueName: \"kubernetes.io/projected/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-kube-api-access-2899z\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.783956 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6chl4\" (UniqueName: \"kubernetes.io/projected/c196e172-3c33-4317-82c5-2dfbb916f6c4-kube-api-access-6chl4\") pod \"frr-k8s-webhook-server-7df86c4f6c-92lkl\" (UID: \"c196e172-3c33-4317-82c5-2dfbb916f6c4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-92lkl" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.847951 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtflm\" (UniqueName: \"kubernetes.io/projected/c93402b1-0843-4fac-980f-172929e0cb5e-kube-api-access-gtflm\") pod \"controller-6968d8fdc4-422s5\" (UID: \"c93402b1-0843-4fac-980f-172929e0cb5e\") " pod="metallb-system/controller-6968d8fdc4-422s5" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.848070 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c93402b1-0843-4fac-980f-172929e0cb5e-cert\") pod \"controller-6968d8fdc4-422s5\" (UID: \"c93402b1-0843-4fac-980f-172929e0cb5e\") " pod="metallb-system/controller-6968d8fdc4-422s5" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.848115 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcq9v\" (UniqueName: \"kubernetes.io/projected/4c2138ae-b9a4-4c2b-8049-ee00845be4d7-kube-api-access-bcq9v\") pod \"speaker-fmxh8\" (UID: 
\"4c2138ae-b9a4-4c2b-8049-ee00845be4d7\") " pod="metallb-system/speaker-fmxh8" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.848150 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c2138ae-b9a4-4c2b-8049-ee00845be4d7-metrics-certs\") pod \"speaker-fmxh8\" (UID: \"4c2138ae-b9a4-4c2b-8049-ee00845be4d7\") " pod="metallb-system/speaker-fmxh8" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.848178 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c93402b1-0843-4fac-980f-172929e0cb5e-metrics-certs\") pod \"controller-6968d8fdc4-422s5\" (UID: \"c93402b1-0843-4fac-980f-172929e0cb5e\") " pod="metallb-system/controller-6968d8fdc4-422s5" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.848226 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4c2138ae-b9a4-4c2b-8049-ee00845be4d7-metallb-excludel2\") pod \"speaker-fmxh8\" (UID: \"4c2138ae-b9a4-4c2b-8049-ee00845be4d7\") " pod="metallb-system/speaker-fmxh8" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.848245 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4c2138ae-b9a4-4c2b-8049-ee00845be4d7-memberlist\") pod \"speaker-fmxh8\" (UID: \"4c2138ae-b9a4-4c2b-8049-ee00845be4d7\") " pod="metallb-system/speaker-fmxh8" Jan 28 17:30:19 crc kubenswrapper[5001]: E0128 17:30:19.848312 5001 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 28 17:30:19 crc kubenswrapper[5001]: E0128 17:30:19.848375 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c93402b1-0843-4fac-980f-172929e0cb5e-metrics-certs podName:c93402b1-0843-4fac-980f-172929e0cb5e nodeName:}" failed. No retries permitted until 2026-01-28 17:30:20.348358758 +0000 UTC m=+866.516146988 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c93402b1-0843-4fac-980f-172929e0cb5e-metrics-certs") pod "controller-6968d8fdc4-422s5" (UID: "c93402b1-0843-4fac-980f-172929e0cb5e") : secret "controller-certs-secret" not found Jan 28 17:30:19 crc kubenswrapper[5001]: E0128 17:30:19.848375 5001 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 17:30:19 crc kubenswrapper[5001]: E0128 17:30:19.848407 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c2138ae-b9a4-4c2b-8049-ee00845be4d7-memberlist podName:4c2138ae-b9a4-4c2b-8049-ee00845be4d7 nodeName:}" failed. No retries permitted until 2026-01-28 17:30:20.34839824 +0000 UTC m=+866.516186470 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4c2138ae-b9a4-4c2b-8049-ee00845be4d7-memberlist") pod "speaker-fmxh8" (UID: "4c2138ae-b9a4-4c2b-8049-ee00845be4d7") : secret "metallb-memberlist" not found Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.849000 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4c2138ae-b9a4-4c2b-8049-ee00845be4d7-metallb-excludel2\") pod \"speaker-fmxh8\" (UID: \"4c2138ae-b9a4-4c2b-8049-ee00845be4d7\") " pod="metallb-system/speaker-fmxh8" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.850532 5001 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.854507 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c2138ae-b9a4-4c2b-8049-ee00845be4d7-metrics-certs\") pod \"speaker-fmxh8\" (UID: \"4c2138ae-b9a4-4c2b-8049-ee00845be4d7\") " pod="metallb-system/speaker-fmxh8" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.864531 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c93402b1-0843-4fac-980f-172929e0cb5e-cert\") pod \"controller-6968d8fdc4-422s5\" (UID: \"c93402b1-0843-4fac-980f-172929e0cb5e\") " pod="metallb-system/controller-6968d8fdc4-422s5" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.876681 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcq9v\" (UniqueName: \"kubernetes.io/projected/4c2138ae-b9a4-4c2b-8049-ee00845be4d7-kube-api-access-bcq9v\") pod \"speaker-fmxh8\" (UID: \"4c2138ae-b9a4-4c2b-8049-ee00845be4d7\") " pod="metallb-system/speaker-fmxh8" Jan 28 17:30:19 crc kubenswrapper[5001]: I0128 17:30:19.882750 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtflm\" (UniqueName: \"kubernetes.io/projected/c93402b1-0843-4fac-980f-172929e0cb5e-kube-api-access-gtflm\") pod \"controller-6968d8fdc4-422s5\" (UID: \"c93402b1-0843-4fac-980f-172929e0cb5e\") " pod="metallb-system/controller-6968d8fdc4-422s5" Jan 28 17:30:20 crc kubenswrapper[5001]: I0128 17:30:20.253436 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-metrics-certs\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:20 crc kubenswrapper[5001]: I0128 17:30:20.253833 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c196e172-3c33-4317-82c5-2dfbb916f6c4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-92lkl\" (UID: \"c196e172-3c33-4317-82c5-2dfbb916f6c4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-92lkl" Jan 28 17:30:20 crc kubenswrapper[5001]: I0128 17:30:20.256584 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d4307f53-3a2c-4fb5-8c0b-395eaf4582bb-metrics-certs\") pod \"frr-k8s-9mqw4\" (UID: \"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb\") " pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:20 crc kubenswrapper[5001]: I0128 17:30:20.257105 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/c196e172-3c33-4317-82c5-2dfbb916f6c4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-92lkl\" (UID: \"c196e172-3c33-4317-82c5-2dfbb916f6c4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-92lkl" Jan 28 17:30:20 crc kubenswrapper[5001]: I0128 17:30:20.355458 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c93402b1-0843-4fac-980f-172929e0cb5e-metrics-certs\") pod \"controller-6968d8fdc4-422s5\" (UID: \"c93402b1-0843-4fac-980f-172929e0cb5e\") " pod="metallb-system/controller-6968d8fdc4-422s5" Jan 28 17:30:20 crc kubenswrapper[5001]: I0128 17:30:20.355539 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4c2138ae-b9a4-4c2b-8049-ee00845be4d7-memberlist\") pod \"speaker-fmxh8\" (UID: \"4c2138ae-b9a4-4c2b-8049-ee00845be4d7\") " pod="metallb-system/speaker-fmxh8" Jan 28 17:30:20 crc kubenswrapper[5001]: E0128 17:30:20.355719 5001 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 17:30:20 crc kubenswrapper[5001]: E0128 17:30:20.355787 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c2138ae-b9a4-4c2b-8049-ee00845be4d7-memberlist podName:4c2138ae-b9a4-4c2b-8049-ee00845be4d7 nodeName:}" failed. No retries permitted until 2026-01-28 17:30:21.355767599 +0000 UTC m=+867.523555839 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4c2138ae-b9a4-4c2b-8049-ee00845be4d7-memberlist") pod "speaker-fmxh8" (UID: "4c2138ae-b9a4-4c2b-8049-ee00845be4d7") : secret "metallb-memberlist" not found Jan 28 17:30:20 crc kubenswrapper[5001]: I0128 17:30:20.359646 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c93402b1-0843-4fac-980f-172929e0cb5e-metrics-certs\") pod \"controller-6968d8fdc4-422s5\" (UID: \"c93402b1-0843-4fac-980f-172929e0cb5e\") " pod="metallb-system/controller-6968d8fdc4-422s5" Jan 28 17:30:20 crc kubenswrapper[5001]: I0128 17:30:20.481417 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:20 crc kubenswrapper[5001]: I0128 17:30:20.492019 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-92lkl" Jan 28 17:30:20 crc kubenswrapper[5001]: I0128 17:30:20.603491 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-422s5" Jan 28 17:30:20 crc kubenswrapper[5001]: I0128 17:30:20.792355 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-422s5"] Jan 28 17:30:20 crc kubenswrapper[5001]: W0128 17:30:20.795934 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc93402b1_0843_4fac_980f_172929e0cb5e.slice/crio-16d9c03a13a136cc80e27d98072e295ce3caf4a232c688b36535b8bbd216cb85 WatchSource:0}: Error finding container 16d9c03a13a136cc80e27d98072e295ce3caf4a232c688b36535b8bbd216cb85: Status 404 returned error can't find the container with id 16d9c03a13a136cc80e27d98072e295ce3caf4a232c688b36535b8bbd216cb85 Jan 28 17:30:20 crc kubenswrapper[5001]: I0128 17:30:20.896146 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-92lkl"] Jan 28 17:30:20 crc kubenswrapper[5001]: W0128 17:30:20.907232 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc196e172_3c33_4317_82c5_2dfbb916f6c4.slice/crio-66d6e1b92553c45ab7bb7d830b9741f8e64646ff19bf996454327aaaa11bb0c5 WatchSource:0}: Error finding container 66d6e1b92553c45ab7bb7d830b9741f8e64646ff19bf996454327aaaa11bb0c5: Status 404 returned error can't find the container with id 66d6e1b92553c45ab7bb7d830b9741f8e64646ff19bf996454327aaaa11bb0c5 Jan 28 17:30:21 crc kubenswrapper[5001]: I0128 17:30:21.152361 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-422s5" event={"ID":"c93402b1-0843-4fac-980f-172929e0cb5e","Type":"ContainerStarted","Data":"54b717eec505f5b467b5e43e499b0f2395c68972ae4d656d0795668f5ee980e9"} Jan 28 17:30:21 crc kubenswrapper[5001]: I0128 17:30:21.152720 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-422s5" event={"ID":"c93402b1-0843-4fac-980f-172929e0cb5e","Type":"ContainerStarted","Data":"880971dfd25dd63d3feb0d12420a86df084fb7eaaab2b2152353cff5932f9219"} Jan 28 17:30:21 crc kubenswrapper[5001]: I0128 17:30:21.152738 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-422s5" event={"ID":"c93402b1-0843-4fac-980f-172929e0cb5e","Type":"ContainerStarted","Data":"16d9c03a13a136cc80e27d98072e295ce3caf4a232c688b36535b8bbd216cb85"} Jan 28 17:30:21 crc kubenswrapper[5001]: I0128 17:30:21.152755 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-422s5" Jan 28 17:30:21 crc kubenswrapper[5001]: I0128 17:30:21.153925 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-92lkl" event={"ID":"c196e172-3c33-4317-82c5-2dfbb916f6c4","Type":"ContainerStarted","Data":"66d6e1b92553c45ab7bb7d830b9741f8e64646ff19bf996454327aaaa11bb0c5"} Jan 28 17:30:21 crc kubenswrapper[5001]: I0128 17:30:21.154943 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9mqw4" event={"ID":"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb","Type":"ContainerStarted","Data":"8fda621afb0d364155e8bd50d02d14f4eac61404f030acb8733a7b41caeba9d4"} Jan 28 17:30:21 crc kubenswrapper[5001]: I0128 17:30:21.168292 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-422s5" podStartSLOduration=2.168278193 podStartE2EDuration="2.168278193s" 
podCreationTimestamp="2026-01-28 17:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:30:21.165928205 +0000 UTC m=+867.333716435" watchObservedRunningTime="2026-01-28 17:30:21.168278193 +0000 UTC m=+867.336066423" Jan 28 17:30:21 crc kubenswrapper[5001]: I0128 17:30:21.367059 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4c2138ae-b9a4-4c2b-8049-ee00845be4d7-memberlist\") pod \"speaker-fmxh8\" (UID: \"4c2138ae-b9a4-4c2b-8049-ee00845be4d7\") " pod="metallb-system/speaker-fmxh8" Jan 28 17:30:21 crc kubenswrapper[5001]: I0128 17:30:21.372658 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4c2138ae-b9a4-4c2b-8049-ee00845be4d7-memberlist\") pod \"speaker-fmxh8\" (UID: \"4c2138ae-b9a4-4c2b-8049-ee00845be4d7\") " pod="metallb-system/speaker-fmxh8" Jan 28 17:30:21 crc kubenswrapper[5001]: I0128 17:30:21.489754 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fmxh8" Jan 28 17:30:21 crc kubenswrapper[5001]: W0128 17:30:21.511680 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c2138ae_b9a4_4c2b_8049_ee00845be4d7.slice/crio-7e1c46e4b40ca2586458af2b06522afc72bd9339db1fafd1090fa51ebdab3977 WatchSource:0}: Error finding container 7e1c46e4b40ca2586458af2b06522afc72bd9339db1fafd1090fa51ebdab3977: Status 404 returned error can't find the container with id 7e1c46e4b40ca2586458af2b06522afc72bd9339db1fafd1090fa51ebdab3977 Jan 28 17:30:22 crc kubenswrapper[5001]: I0128 17:30:22.162341 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fmxh8" event={"ID":"4c2138ae-b9a4-4c2b-8049-ee00845be4d7","Type":"ContainerStarted","Data":"8c441c27a6ef69d7f2d3e949236864dd123952f73cd39fcbd7de552f30e15baa"} Jan 28 17:30:22 crc kubenswrapper[5001]: I0128 17:30:22.162666 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fmxh8" event={"ID":"4c2138ae-b9a4-4c2b-8049-ee00845be4d7","Type":"ContainerStarted","Data":"fa7b8a02689b9815f61304ed439cb550f8accfe17e3e0b18f657344cf871076b"} Jan 28 17:30:22 crc kubenswrapper[5001]: I0128 17:30:22.162695 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fmxh8" event={"ID":"4c2138ae-b9a4-4c2b-8049-ee00845be4d7","Type":"ContainerStarted","Data":"7e1c46e4b40ca2586458af2b06522afc72bd9339db1fafd1090fa51ebdab3977"} Jan 28 17:30:22 crc kubenswrapper[5001]: I0128 17:30:22.163302 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-fmxh8" Jan 28 17:30:22 crc kubenswrapper[5001]: I0128 17:30:22.182921 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-fmxh8" podStartSLOduration=3.182904139 podStartE2EDuration="3.182904139s" podCreationTimestamp="2026-01-28 17:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:30:22.176678818 +0000 UTC m=+868.344467048" watchObservedRunningTime="2026-01-28 17:30:22.182904139 +0000 UTC m=+868.350692359" Jan 28 17:30:29 crc kubenswrapper[5001]: I0128 17:30:29.204729 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-92lkl" 
event={"ID":"c196e172-3c33-4317-82c5-2dfbb916f6c4","Type":"ContainerStarted","Data":"4f925f7aa84c7cab7a8a78f04f51b8120950b5f4cb7475313de3fd1990b5614f"} Jan 28 17:30:29 crc kubenswrapper[5001]: I0128 17:30:29.205766 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-92lkl" Jan 28 17:30:29 crc kubenswrapper[5001]: I0128 17:30:29.207578 5001 generic.go:334] "Generic (PLEG): container finished" podID="d4307f53-3a2c-4fb5-8c0b-395eaf4582bb" containerID="d2c1e96b1613618143b7fe3c497d7def31427618276d4f3133d93d069b69598d" exitCode=0 Jan 28 17:30:29 crc kubenswrapper[5001]: I0128 17:30:29.207635 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9mqw4" event={"ID":"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb","Type":"ContainerDied","Data":"d2c1e96b1613618143b7fe3c497d7def31427618276d4f3133d93d069b69598d"} Jan 28 17:30:29 crc kubenswrapper[5001]: I0128 17:30:29.224485 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-92lkl" podStartSLOduration=2.890022851 podStartE2EDuration="10.224461994s" podCreationTimestamp="2026-01-28 17:30:19 +0000 UTC" firstStartedPulling="2026-01-28 17:30:20.90967309 +0000 UTC m=+867.077461320" lastFinishedPulling="2026-01-28 17:30:28.244112233 +0000 UTC m=+874.411900463" observedRunningTime="2026-01-28 17:30:29.219266063 +0000 UTC m=+875.387054303" watchObservedRunningTime="2026-01-28 17:30:29.224461994 +0000 UTC m=+875.392250224" Jan 28 17:30:30 crc kubenswrapper[5001]: I0128 17:30:30.213407 5001 generic.go:334] "Generic (PLEG): container finished" podID="d4307f53-3a2c-4fb5-8c0b-395eaf4582bb" containerID="93f7ce35bfc9588dfb007644ffc8f0319f2fe8ae83e752d94474b3608e82ea0d" exitCode=0 Jan 28 17:30:30 crc kubenswrapper[5001]: I0128 17:30:30.213507 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9mqw4" event={"ID":"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb","Type":"ContainerDied","Data":"93f7ce35bfc9588dfb007644ffc8f0319f2fe8ae83e752d94474b3608e82ea0d"} Jan 28 17:30:30 crc kubenswrapper[5001]: I0128 17:30:30.608403 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-422s5" Jan 28 17:30:31 crc kubenswrapper[5001]: I0128 17:30:31.222253 5001 generic.go:334] "Generic (PLEG): container finished" podID="d4307f53-3a2c-4fb5-8c0b-395eaf4582bb" containerID="9f89229d8916a65bf262bfa3397c00598e84d1f180d73501cb1cb87f3d3e3351" exitCode=0 Jan 28 17:30:31 crc kubenswrapper[5001]: I0128 17:30:31.222346 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9mqw4" event={"ID":"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb","Type":"ContainerDied","Data":"9f89229d8916a65bf262bfa3397c00598e84d1f180d73501cb1cb87f3d3e3351"} Jan 28 17:30:31 crc kubenswrapper[5001]: I0128 17:30:31.492905 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-fmxh8" Jan 28 17:30:32 crc kubenswrapper[5001]: I0128 17:30:32.234360 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9mqw4" event={"ID":"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb","Type":"ContainerStarted","Data":"54eb06acd62fa0952e15c637994f0ed4e4162632408f58c180b2bc089c35d9fc"} Jan 28 17:30:32 crc kubenswrapper[5001]: I0128 17:30:32.234674 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9mqw4" 
event={"ID":"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb","Type":"ContainerStarted","Data":"fd2be615f66a1a060420e891fe4788b7f63d3128bda8d94f84413d8cf93cb4e3"} Jan 28 17:30:32 crc kubenswrapper[5001]: I0128 17:30:32.234692 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9mqw4" event={"ID":"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb","Type":"ContainerStarted","Data":"0aeceec70cefa56f7cb4be875468ea2ba3d7d3bbc611829d549f025e5fa44c45"} Jan 28 17:30:32 crc kubenswrapper[5001]: I0128 17:30:32.234708 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9mqw4" event={"ID":"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb","Type":"ContainerStarted","Data":"1e8086c6230621390d6afe14ec1caec108179161d2141383782e32a3786764d2"} Jan 28 17:30:32 crc kubenswrapper[5001]: I0128 17:30:32.234723 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9mqw4" event={"ID":"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb","Type":"ContainerStarted","Data":"4503e976f7a364a34c5fbc8672bbae46cff0369c145c17728080cfc3bdc462db"} Jan 28 17:30:32 crc kubenswrapper[5001]: I0128 17:30:32.234735 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9mqw4" event={"ID":"d4307f53-3a2c-4fb5-8c0b-395eaf4582bb","Type":"ContainerStarted","Data":"adb4a3cea0a09b89514757c3310a444ee84fe768e886cfab7eebdf286bd8b5fe"} Jan 28 17:30:32 crc kubenswrapper[5001]: I0128 17:30:32.235174 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:32 crc kubenswrapper[5001]: I0128 17:30:32.259841 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9mqw4" podStartSLOduration=5.732748675 podStartE2EDuration="13.259825584s" podCreationTimestamp="2026-01-28 17:30:19 +0000 UTC" firstStartedPulling="2026-01-28 17:30:20.746017596 +0000 UTC m=+866.913805826" lastFinishedPulling="2026-01-28 17:30:28.273094505 +0000 UTC m=+874.440882735" observedRunningTime="2026-01-28 17:30:32.25832843 +0000 UTC m=+878.426116670" watchObservedRunningTime="2026-01-28 17:30:32.259825584 +0000 UTC m=+878.427613814" Jan 28 17:30:32 crc kubenswrapper[5001]: I0128 17:30:32.834405 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr"] Jan 28 17:30:32 crc kubenswrapper[5001]: I0128 17:30:32.835841 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr" Jan 28 17:30:32 crc kubenswrapper[5001]: I0128 17:30:32.837431 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 17:30:32 crc kubenswrapper[5001]: I0128 17:30:32.856291 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr"] Jan 28 17:30:33 crc kubenswrapper[5001]: I0128 17:30:33.014718 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5rtk\" (UniqueName: \"kubernetes.io/projected/7540c5dd-c168-474a-9e79-e0fd9fa9f8e8-kube-api-access-w5rtk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr\" (UID: \"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr" Jan 28 17:30:33 crc kubenswrapper[5001]: I0128 17:30:33.014772 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7540c5dd-c168-474a-9e79-e0fd9fa9f8e8-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr\" (UID: \"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr" Jan 28 17:30:33 crc kubenswrapper[5001]: I0128 17:30:33.014819 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7540c5dd-c168-474a-9e79-e0fd9fa9f8e8-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr\" (UID: \"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr" Jan 28 17:30:33 crc kubenswrapper[5001]: I0128 17:30:33.115943 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5rtk\" (UniqueName: \"kubernetes.io/projected/7540c5dd-c168-474a-9e79-e0fd9fa9f8e8-kube-api-access-w5rtk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr\" (UID: \"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr" Jan 28 17:30:33 crc kubenswrapper[5001]: I0128 17:30:33.116051 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7540c5dd-c168-474a-9e79-e0fd9fa9f8e8-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr\" (UID: \"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr" Jan 28 17:30:33 crc kubenswrapper[5001]: I0128 17:30:33.116119 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7540c5dd-c168-474a-9e79-e0fd9fa9f8e8-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr\" (UID: \"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr" Jan 28 17:30:33 crc kubenswrapper[5001]: I0128 17:30:33.116535 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/7540c5dd-c168-474a-9e79-e0fd9fa9f8e8-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr\" (UID: \"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr" Jan 28 17:30:33 crc kubenswrapper[5001]: I0128 17:30:33.116633 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7540c5dd-c168-474a-9e79-e0fd9fa9f8e8-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr\" (UID: \"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr" Jan 28 17:30:33 crc kubenswrapper[5001]: I0128 17:30:33.134956 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5rtk\" (UniqueName: \"kubernetes.io/projected/7540c5dd-c168-474a-9e79-e0fd9fa9f8e8-kube-api-access-w5rtk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr\" (UID: \"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr" Jan 28 17:30:33 crc kubenswrapper[5001]: I0128 17:30:33.154520 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr" Jan 28 17:30:33 crc kubenswrapper[5001]: I0128 17:30:33.545719 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr"] Jan 28 17:30:34 crc kubenswrapper[5001]: I0128 17:30:34.257808 5001 generic.go:334] "Generic (PLEG): container finished" podID="7540c5dd-c168-474a-9e79-e0fd9fa9f8e8" containerID="9898c76f7c966aefbc31fbb3efc342adfa62d493da40c9df5ff5270e4beb01d9" exitCode=0 Jan 28 17:30:34 crc kubenswrapper[5001]: I0128 17:30:34.257848 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr" event={"ID":"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8","Type":"ContainerDied","Data":"9898c76f7c966aefbc31fbb3efc342adfa62d493da40c9df5ff5270e4beb01d9"} Jan 28 17:30:34 crc kubenswrapper[5001]: I0128 17:30:34.258177 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr" event={"ID":"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8","Type":"ContainerStarted","Data":"4c58355f0ffcf013165819ebf6fbe55322a8210d58aa8551b2b2c2c20cad4e11"} Jan 28 17:30:35 crc kubenswrapper[5001]: I0128 17:30:35.482579 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:35 crc kubenswrapper[5001]: I0128 17:30:35.545629 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:38 crc kubenswrapper[5001]: I0128 17:30:38.287364 5001 generic.go:334] "Generic (PLEG): container finished" podID="7540c5dd-c168-474a-9e79-e0fd9fa9f8e8" containerID="00c0be977f33812a1b597e62edcbe0b1c04b9c0a1fb0dcba4c691e26ccef671a" exitCode=0 Jan 28 17:30:38 crc kubenswrapper[5001]: I0128 17:30:38.287586 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr" 
event={"ID":"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8","Type":"ContainerDied","Data":"00c0be977f33812a1b597e62edcbe0b1c04b9c0a1fb0dcba4c691e26ccef671a"} Jan 28 17:30:39 crc kubenswrapper[5001]: I0128 17:30:39.296264 5001 generic.go:334] "Generic (PLEG): container finished" podID="7540c5dd-c168-474a-9e79-e0fd9fa9f8e8" containerID="c20c8a9b0d267281125909c4f061fe34e715215faa6822c568027d2067f1ae65" exitCode=0 Jan 28 17:30:39 crc kubenswrapper[5001]: I0128 17:30:39.296321 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr" event={"ID":"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8","Type":"ContainerDied","Data":"c20c8a9b0d267281125909c4f061fe34e715215faa6822c568027d2067f1ae65"} Jan 28 17:30:40 crc kubenswrapper[5001]: I0128 17:30:40.517656 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-92lkl" Jan 28 17:30:40 crc kubenswrapper[5001]: I0128 17:30:40.573101 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr" Jan 28 17:30:40 crc kubenswrapper[5001]: I0128 17:30:40.711295 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5rtk\" (UniqueName: \"kubernetes.io/projected/7540c5dd-c168-474a-9e79-e0fd9fa9f8e8-kube-api-access-w5rtk\") pod \"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8\" (UID: \"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8\") " Jan 28 17:30:40 crc kubenswrapper[5001]: I0128 17:30:40.711429 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7540c5dd-c168-474a-9e79-e0fd9fa9f8e8-util\") pod \"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8\" (UID: \"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8\") " Jan 28 17:30:40 crc kubenswrapper[5001]: I0128 17:30:40.711489 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7540c5dd-c168-474a-9e79-e0fd9fa9f8e8-bundle\") pod \"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8\" (UID: \"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8\") " Jan 28 17:30:40 crc kubenswrapper[5001]: I0128 17:30:40.712896 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7540c5dd-c168-474a-9e79-e0fd9fa9f8e8-bundle" (OuterVolumeSpecName: "bundle") pod "7540c5dd-c168-474a-9e79-e0fd9fa9f8e8" (UID: "7540c5dd-c168-474a-9e79-e0fd9fa9f8e8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:30:40 crc kubenswrapper[5001]: I0128 17:30:40.717067 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7540c5dd-c168-474a-9e79-e0fd9fa9f8e8-kube-api-access-w5rtk" (OuterVolumeSpecName: "kube-api-access-w5rtk") pod "7540c5dd-c168-474a-9e79-e0fd9fa9f8e8" (UID: "7540c5dd-c168-474a-9e79-e0fd9fa9f8e8"). InnerVolumeSpecName "kube-api-access-w5rtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:30:40 crc kubenswrapper[5001]: I0128 17:30:40.722045 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7540c5dd-c168-474a-9e79-e0fd9fa9f8e8-util" (OuterVolumeSpecName: "util") pod "7540c5dd-c168-474a-9e79-e0fd9fa9f8e8" (UID: "7540c5dd-c168-474a-9e79-e0fd9fa9f8e8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:30:40 crc kubenswrapper[5001]: I0128 17:30:40.813427 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5rtk\" (UniqueName: \"kubernetes.io/projected/7540c5dd-c168-474a-9e79-e0fd9fa9f8e8-kube-api-access-w5rtk\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:40 crc kubenswrapper[5001]: I0128 17:30:40.813479 5001 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7540c5dd-c168-474a-9e79-e0fd9fa9f8e8-util\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:40 crc kubenswrapper[5001]: I0128 17:30:40.813491 5001 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7540c5dd-c168-474a-9e79-e0fd9fa9f8e8-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:41 crc kubenswrapper[5001]: I0128 17:30:41.310043 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr" event={"ID":"7540c5dd-c168-474a-9e79-e0fd9fa9f8e8","Type":"ContainerDied","Data":"4c58355f0ffcf013165819ebf6fbe55322a8210d58aa8551b2b2c2c20cad4e11"} Jan 28 17:30:41 crc kubenswrapper[5001]: I0128 17:30:41.310091 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c58355f0ffcf013165819ebf6fbe55322a8210d58aa8551b2b2c2c20cad4e11" Jan 28 17:30:41 crc kubenswrapper[5001]: I0128 17:30:41.310368 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr" Jan 28 17:30:45 crc kubenswrapper[5001]: I0128 17:30:45.669809 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-plvt5"] Jan 28 17:30:45 crc kubenswrapper[5001]: E0128 17:30:45.670666 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7540c5dd-c168-474a-9e79-e0fd9fa9f8e8" containerName="pull" Jan 28 17:30:45 crc kubenswrapper[5001]: I0128 17:30:45.670692 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="7540c5dd-c168-474a-9e79-e0fd9fa9f8e8" containerName="pull" Jan 28 17:30:45 crc kubenswrapper[5001]: E0128 17:30:45.670706 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7540c5dd-c168-474a-9e79-e0fd9fa9f8e8" containerName="util" Jan 28 17:30:45 crc kubenswrapper[5001]: I0128 17:30:45.670713 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="7540c5dd-c168-474a-9e79-e0fd9fa9f8e8" containerName="util" Jan 28 17:30:45 crc kubenswrapper[5001]: E0128 17:30:45.670724 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7540c5dd-c168-474a-9e79-e0fd9fa9f8e8" containerName="extract" Jan 28 17:30:45 crc kubenswrapper[5001]: I0128 17:30:45.670732 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="7540c5dd-c168-474a-9e79-e0fd9fa9f8e8" containerName="extract" Jan 28 17:30:45 crc kubenswrapper[5001]: I0128 17:30:45.670827 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="7540c5dd-c168-474a-9e79-e0fd9fa9f8e8" containerName="extract" Jan 28 17:30:45 crc kubenswrapper[5001]: I0128 17:30:45.671428 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-plvt5" Jan 28 17:30:45 crc kubenswrapper[5001]: I0128 17:30:45.673235 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 28 17:30:45 crc kubenswrapper[5001]: I0128 17:30:45.673470 5001 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-bfcqn" Jan 28 17:30:45 crc kubenswrapper[5001]: I0128 17:30:45.680611 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 28 17:30:45 crc kubenswrapper[5001]: I0128 17:30:45.688001 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-plvt5"] Jan 28 17:30:45 crc kubenswrapper[5001]: I0128 17:30:45.782087 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3b28533-f660-4e61-83e2-bf1d7002f7d5-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-plvt5\" (UID: \"d3b28533-f660-4e61-83e2-bf1d7002f7d5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-plvt5" Jan 28 17:30:45 crc kubenswrapper[5001]: I0128 17:30:45.782290 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcmsf\" (UniqueName: \"kubernetes.io/projected/d3b28533-f660-4e61-83e2-bf1d7002f7d5-kube-api-access-fcmsf\") pod \"cert-manager-operator-controller-manager-64cf6dff88-plvt5\" (UID: \"d3b28533-f660-4e61-83e2-bf1d7002f7d5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-plvt5" Jan 28 17:30:45 crc kubenswrapper[5001]: I0128 17:30:45.884226 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcmsf\" (UniqueName: \"kubernetes.io/projected/d3b28533-f660-4e61-83e2-bf1d7002f7d5-kube-api-access-fcmsf\") pod \"cert-manager-operator-controller-manager-64cf6dff88-plvt5\" (UID: \"d3b28533-f660-4e61-83e2-bf1d7002f7d5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-plvt5" Jan 28 17:30:45 crc kubenswrapper[5001]: I0128 17:30:45.884320 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3b28533-f660-4e61-83e2-bf1d7002f7d5-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-plvt5\" (UID: \"d3b28533-f660-4e61-83e2-bf1d7002f7d5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-plvt5" Jan 28 17:30:45 crc kubenswrapper[5001]: I0128 17:30:45.885121 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3b28533-f660-4e61-83e2-bf1d7002f7d5-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-plvt5\" (UID: \"d3b28533-f660-4e61-83e2-bf1d7002f7d5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-plvt5" Jan 28 17:30:45 crc kubenswrapper[5001]: I0128 17:30:45.908625 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcmsf\" (UniqueName: \"kubernetes.io/projected/d3b28533-f660-4e61-83e2-bf1d7002f7d5-kube-api-access-fcmsf\") pod \"cert-manager-operator-controller-manager-64cf6dff88-plvt5\" (UID: \"d3b28533-f660-4e61-83e2-bf1d7002f7d5\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-plvt5" Jan 28 17:30:46 crc kubenswrapper[5001]: I0128 17:30:46.005700 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-plvt5" Jan 28 17:30:46 crc kubenswrapper[5001]: I0128 17:30:46.297357 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-plvt5"] Jan 28 17:30:46 crc kubenswrapper[5001]: W0128 17:30:46.301896 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3b28533_f660_4e61_83e2_bf1d7002f7d5.slice/crio-db9985fecd86c3f4b38c6c68962e26134f98b8f02ba6154f57a8f781ee41159d WatchSource:0}: Error finding container db9985fecd86c3f4b38c6c68962e26134f98b8f02ba6154f57a8f781ee41159d: Status 404 returned error can't find the container with id db9985fecd86c3f4b38c6c68962e26134f98b8f02ba6154f57a8f781ee41159d Jan 28 17:30:46 crc kubenswrapper[5001]: I0128 17:30:46.342013 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-plvt5" event={"ID":"d3b28533-f660-4e61-83e2-bf1d7002f7d5","Type":"ContainerStarted","Data":"db9985fecd86c3f4b38c6c68962e26134f98b8f02ba6154f57a8f781ee41159d"} Jan 28 17:30:50 crc kubenswrapper[5001]: I0128 17:30:50.484821 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9mqw4" Jan 28 17:30:53 crc kubenswrapper[5001]: I0128 17:30:53.388120 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-plvt5" event={"ID":"d3b28533-f660-4e61-83e2-bf1d7002f7d5","Type":"ContainerStarted","Data":"c1a641dc366075c37bd76a40a6e94b8ec19b749f0d37d12b2199e488d41abe9f"} Jan 28 17:30:53 crc kubenswrapper[5001]: I0128 17:30:53.424279 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-plvt5" podStartSLOduration=2.077466351 podStartE2EDuration="8.424264752s" podCreationTimestamp="2026-01-28 17:30:45 +0000 UTC" firstStartedPulling="2026-01-28 17:30:46.305696621 +0000 UTC m=+892.473484851" lastFinishedPulling="2026-01-28 17:30:52.652494992 +0000 UTC m=+898.820283252" observedRunningTime="2026-01-28 17:30:53.422277304 +0000 UTC m=+899.590065554" watchObservedRunningTime="2026-01-28 17:30:53.424264752 +0000 UTC m=+899.592052982" Jan 28 17:30:57 crc kubenswrapper[5001]: I0128 17:30:57.425366 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-wf68z"] Jan 28 17:30:57 crc kubenswrapper[5001]: I0128 17:30:57.426571 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-wf68z" Jan 28 17:30:57 crc kubenswrapper[5001]: I0128 17:30:57.428668 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 28 17:30:57 crc kubenswrapper[5001]: I0128 17:30:57.428730 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 28 17:30:57 crc kubenswrapper[5001]: I0128 17:30:57.428892 5001 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-95xq9" Jan 28 17:30:57 crc kubenswrapper[5001]: I0128 17:30:57.434901 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-wf68z"] Jan 28 17:30:57 crc kubenswrapper[5001]: I0128 17:30:57.546894 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rd6k\" (UniqueName: \"kubernetes.io/projected/267af68d-70ec-485f-bb8a-abd72b8a5323-kube-api-access-8rd6k\") pod \"cert-manager-cainjector-855d9ccff4-wf68z\" (UID: \"267af68d-70ec-485f-bb8a-abd72b8a5323\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-wf68z" Jan 28 17:30:57 crc kubenswrapper[5001]: I0128 17:30:57.546933 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/267af68d-70ec-485f-bb8a-abd72b8a5323-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-wf68z\" (UID: \"267af68d-70ec-485f-bb8a-abd72b8a5323\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-wf68z" Jan 28 17:30:57 crc kubenswrapper[5001]: I0128 17:30:57.648440 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rd6k\" (UniqueName: \"kubernetes.io/projected/267af68d-70ec-485f-bb8a-abd72b8a5323-kube-api-access-8rd6k\") pod \"cert-manager-cainjector-855d9ccff4-wf68z\" (UID: \"267af68d-70ec-485f-bb8a-abd72b8a5323\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-wf68z" Jan 28 17:30:57 crc kubenswrapper[5001]: I0128 17:30:57.648485 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/267af68d-70ec-485f-bb8a-abd72b8a5323-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-wf68z\" (UID: \"267af68d-70ec-485f-bb8a-abd72b8a5323\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-wf68z" Jan 28 17:30:57 crc kubenswrapper[5001]: I0128 17:30:57.665759 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/267af68d-70ec-485f-bb8a-abd72b8a5323-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-wf68z\" (UID: \"267af68d-70ec-485f-bb8a-abd72b8a5323\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-wf68z" Jan 28 17:30:57 crc kubenswrapper[5001]: I0128 17:30:57.665834 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rd6k\" (UniqueName: \"kubernetes.io/projected/267af68d-70ec-485f-bb8a-abd72b8a5323-kube-api-access-8rd6k\") pod \"cert-manager-cainjector-855d9ccff4-wf68z\" (UID: \"267af68d-70ec-485f-bb8a-abd72b8a5323\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-wf68z" Jan 28 17:30:57 crc kubenswrapper[5001]: I0128 17:30:57.740681 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-wf68z" Jan 28 17:30:57 crc kubenswrapper[5001]: I0128 17:30:57.949013 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-wf68z"] Jan 28 17:30:57 crc kubenswrapper[5001]: W0128 17:30:57.965894 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod267af68d_70ec_485f_bb8a_abd72b8a5323.slice/crio-9f94fe1775d8e734902929c186b11d1812bee3c6b608c71f7a0866d6b3d13081 WatchSource:0}: Error finding container 9f94fe1775d8e734902929c186b11d1812bee3c6b608c71f7a0866d6b3d13081: Status 404 returned error can't find the container with id 9f94fe1775d8e734902929c186b11d1812bee3c6b608c71f7a0866d6b3d13081 Jan 28 17:30:58 crc kubenswrapper[5001]: I0128 17:30:58.417805 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-wf68z" event={"ID":"267af68d-70ec-485f-bb8a-abd72b8a5323","Type":"ContainerStarted","Data":"9f94fe1775d8e734902929c186b11d1812bee3c6b608c71f7a0866d6b3d13081"} Jan 28 17:30:58 crc kubenswrapper[5001]: I0128 17:30:58.636339 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-qr8rl"] Jan 28 17:30:58 crc kubenswrapper[5001]: I0128 17:30:58.638318 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-qr8rl" Jan 28 17:30:58 crc kubenswrapper[5001]: I0128 17:30:58.641034 5001 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-gz867" Jan 28 17:30:58 crc kubenswrapper[5001]: I0128 17:30:58.650509 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-qr8rl"] Jan 28 17:30:58 crc kubenswrapper[5001]: I0128 17:30:58.762838 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7xv9\" (UniqueName: \"kubernetes.io/projected/d2543c7d-8b33-4432-a05b-0e8d0b24a168-kube-api-access-t7xv9\") pod \"cert-manager-webhook-f4fb5df64-qr8rl\" (UID: \"d2543c7d-8b33-4432-a05b-0e8d0b24a168\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-qr8rl" Jan 28 17:30:58 crc kubenswrapper[5001]: I0128 17:30:58.763121 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d2543c7d-8b33-4432-a05b-0e8d0b24a168-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-qr8rl\" (UID: \"d2543c7d-8b33-4432-a05b-0e8d0b24a168\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-qr8rl" Jan 28 17:30:58 crc kubenswrapper[5001]: I0128 17:30:58.864518 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d2543c7d-8b33-4432-a05b-0e8d0b24a168-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-qr8rl\" (UID: \"d2543c7d-8b33-4432-a05b-0e8d0b24a168\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-qr8rl" Jan 28 17:30:58 crc kubenswrapper[5001]: I0128 17:30:58.864571 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7xv9\" (UniqueName: \"kubernetes.io/projected/d2543c7d-8b33-4432-a05b-0e8d0b24a168-kube-api-access-t7xv9\") pod \"cert-manager-webhook-f4fb5df64-qr8rl\" (UID: \"d2543c7d-8b33-4432-a05b-0e8d0b24a168\") " 
pod="cert-manager/cert-manager-webhook-f4fb5df64-qr8rl" Jan 28 17:30:58 crc kubenswrapper[5001]: I0128 17:30:58.882675 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d2543c7d-8b33-4432-a05b-0e8d0b24a168-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-qr8rl\" (UID: \"d2543c7d-8b33-4432-a05b-0e8d0b24a168\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-qr8rl" Jan 28 17:30:58 crc kubenswrapper[5001]: I0128 17:30:58.884297 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7xv9\" (UniqueName: \"kubernetes.io/projected/d2543c7d-8b33-4432-a05b-0e8d0b24a168-kube-api-access-t7xv9\") pod \"cert-manager-webhook-f4fb5df64-qr8rl\" (UID: \"d2543c7d-8b33-4432-a05b-0e8d0b24a168\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-qr8rl" Jan 28 17:30:58 crc kubenswrapper[5001]: I0128 17:30:58.961510 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-qr8rl" Jan 28 17:30:59 crc kubenswrapper[5001]: I0128 17:30:59.357066 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-qr8rl"] Jan 28 17:30:59 crc kubenswrapper[5001]: W0128 17:30:59.362515 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2543c7d_8b33_4432_a05b_0e8d0b24a168.slice/crio-5a39737f765c6bb16d2d7d0fef8ce84fb9042dbe59bc3bb13fa2504bfae52f51 WatchSource:0}: Error finding container 5a39737f765c6bb16d2d7d0fef8ce84fb9042dbe59bc3bb13fa2504bfae52f51: Status 404 returned error can't find the container with id 5a39737f765c6bb16d2d7d0fef8ce84fb9042dbe59bc3bb13fa2504bfae52f51 Jan 28 17:30:59 crc kubenswrapper[5001]: I0128 17:30:59.434521 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-qr8rl" event={"ID":"d2543c7d-8b33-4432-a05b-0e8d0b24a168","Type":"ContainerStarted","Data":"5a39737f765c6bb16d2d7d0fef8ce84fb9042dbe59bc3bb13fa2504bfae52f51"} Jan 28 17:31:05 crc kubenswrapper[5001]: I0128 17:31:05.478817 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-qr8rl" event={"ID":"d2543c7d-8b33-4432-a05b-0e8d0b24a168","Type":"ContainerStarted","Data":"1ad035f35da586b70c9b32f3da1eb9a68a054ac2c65d4ee85b115b79aa0ddbf7"} Jan 28 17:31:05 crc kubenswrapper[5001]: I0128 17:31:05.479272 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-qr8rl" Jan 28 17:31:05 crc kubenswrapper[5001]: I0128 17:31:05.481574 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-wf68z" event={"ID":"267af68d-70ec-485f-bb8a-abd72b8a5323","Type":"ContainerStarted","Data":"c58d3b9d5ab81952c7a381223e771f5aaafbb91894e8f9d6fc1e210ffb4f3046"} Jan 28 17:31:05 crc kubenswrapper[5001]: I0128 17:31:05.493555 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-qr8rl" podStartSLOduration=1.881732237 podStartE2EDuration="7.493534407s" podCreationTimestamp="2026-01-28 17:30:58 +0000 UTC" firstStartedPulling="2026-01-28 17:30:59.364464901 +0000 UTC m=+905.532253121" lastFinishedPulling="2026-01-28 17:31:04.976267061 +0000 UTC m=+911.144055291" observedRunningTime="2026-01-28 17:31:05.492393774 +0000 UTC m=+911.660182004" watchObservedRunningTime="2026-01-28 
17:31:05.493534407 +0000 UTC m=+911.661322637" Jan 28 17:31:05 crc kubenswrapper[5001]: I0128 17:31:05.512069 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-wf68z" podStartSLOduration=1.511778086 podStartE2EDuration="8.512044303s" podCreationTimestamp="2026-01-28 17:30:57 +0000 UTC" firstStartedPulling="2026-01-28 17:30:57.968890788 +0000 UTC m=+904.136679018" lastFinishedPulling="2026-01-28 17:31:04.969157005 +0000 UTC m=+911.136945235" observedRunningTime="2026-01-28 17:31:05.509371666 +0000 UTC m=+911.677159896" watchObservedRunningTime="2026-01-28 17:31:05.512044303 +0000 UTC m=+911.679832533" Jan 28 17:31:11 crc kubenswrapper[5001]: I0128 17:31:11.932354 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g8w8w"] Jan 28 17:31:11 crc kubenswrapper[5001]: I0128 17:31:11.934102 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g8w8w" Jan 28 17:31:11 crc kubenswrapper[5001]: I0128 17:31:11.942476 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g8w8w"] Jan 28 17:31:11 crc kubenswrapper[5001]: I0128 17:31:11.990938 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a9885bc-111a-4e49-bf90-4df20f5a486d-catalog-content\") pod \"certified-operators-g8w8w\" (UID: \"8a9885bc-111a-4e49-bf90-4df20f5a486d\") " pod="openshift-marketplace/certified-operators-g8w8w" Jan 28 17:31:11 crc kubenswrapper[5001]: I0128 17:31:11.991102 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvh76\" (UniqueName: \"kubernetes.io/projected/8a9885bc-111a-4e49-bf90-4df20f5a486d-kube-api-access-gvh76\") pod \"certified-operators-g8w8w\" (UID: \"8a9885bc-111a-4e49-bf90-4df20f5a486d\") " pod="openshift-marketplace/certified-operators-g8w8w" Jan 28 17:31:11 crc kubenswrapper[5001]: I0128 17:31:11.991176 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a9885bc-111a-4e49-bf90-4df20f5a486d-utilities\") pod \"certified-operators-g8w8w\" (UID: \"8a9885bc-111a-4e49-bf90-4df20f5a486d\") " pod="openshift-marketplace/certified-operators-g8w8w" Jan 28 17:31:12 crc kubenswrapper[5001]: I0128 17:31:12.092564 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a9885bc-111a-4e49-bf90-4df20f5a486d-catalog-content\") pod \"certified-operators-g8w8w\" (UID: \"8a9885bc-111a-4e49-bf90-4df20f5a486d\") " pod="openshift-marketplace/certified-operators-g8w8w" Jan 28 17:31:12 crc kubenswrapper[5001]: I0128 17:31:12.092649 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvh76\" (UniqueName: \"kubernetes.io/projected/8a9885bc-111a-4e49-bf90-4df20f5a486d-kube-api-access-gvh76\") pod \"certified-operators-g8w8w\" (UID: \"8a9885bc-111a-4e49-bf90-4df20f5a486d\") " pod="openshift-marketplace/certified-operators-g8w8w" Jan 28 17:31:12 crc kubenswrapper[5001]: I0128 17:31:12.092720 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a9885bc-111a-4e49-bf90-4df20f5a486d-utilities\") pod \"certified-operators-g8w8w\" 
(UID: \"8a9885bc-111a-4e49-bf90-4df20f5a486d\") " pod="openshift-marketplace/certified-operators-g8w8w" Jan 28 17:31:12 crc kubenswrapper[5001]: I0128 17:31:12.093302 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a9885bc-111a-4e49-bf90-4df20f5a486d-utilities\") pod \"certified-operators-g8w8w\" (UID: \"8a9885bc-111a-4e49-bf90-4df20f5a486d\") " pod="openshift-marketplace/certified-operators-g8w8w" Jan 28 17:31:12 crc kubenswrapper[5001]: I0128 17:31:12.093293 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a9885bc-111a-4e49-bf90-4df20f5a486d-catalog-content\") pod \"certified-operators-g8w8w\" (UID: \"8a9885bc-111a-4e49-bf90-4df20f5a486d\") " pod="openshift-marketplace/certified-operators-g8w8w" Jan 28 17:31:12 crc kubenswrapper[5001]: I0128 17:31:12.115145 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvh76\" (UniqueName: \"kubernetes.io/projected/8a9885bc-111a-4e49-bf90-4df20f5a486d-kube-api-access-gvh76\") pod \"certified-operators-g8w8w\" (UID: \"8a9885bc-111a-4e49-bf90-4df20f5a486d\") " pod="openshift-marketplace/certified-operators-g8w8w" Jan 28 17:31:12 crc kubenswrapper[5001]: I0128 17:31:12.308819 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g8w8w" Jan 28 17:31:12 crc kubenswrapper[5001]: I0128 17:31:12.732852 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g8w8w"] Jan 28 17:31:13 crc kubenswrapper[5001]: I0128 17:31:13.534581 5001 generic.go:334] "Generic (PLEG): container finished" podID="8a9885bc-111a-4e49-bf90-4df20f5a486d" containerID="b8717132d364b9e21873b2545bd5f4263102a578772c19cab470b0b93808ea9f" exitCode=0 Jan 28 17:31:13 crc kubenswrapper[5001]: I0128 17:31:13.534657 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8w8w" event={"ID":"8a9885bc-111a-4e49-bf90-4df20f5a486d","Type":"ContainerDied","Data":"b8717132d364b9e21873b2545bd5f4263102a578772c19cab470b0b93808ea9f"} Jan 28 17:31:13 crc kubenswrapper[5001]: I0128 17:31:13.534893 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8w8w" event={"ID":"8a9885bc-111a-4e49-bf90-4df20f5a486d","Type":"ContainerStarted","Data":"c7f80a04f61b5ce6876d599f27880b62192b6366d2795c3470f4dcdde53e42c3"} Jan 28 17:31:13 crc kubenswrapper[5001]: I0128 17:31:13.966151 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-qr8rl" Jan 28 17:31:14 crc kubenswrapper[5001]: I0128 17:31:14.408003 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-gcfmc"] Jan 28 17:31:14 crc kubenswrapper[5001]: I0128 17:31:14.408763 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-gcfmc" Jan 28 17:31:14 crc kubenswrapper[5001]: I0128 17:31:14.414580 5001 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-xv64w" Jan 28 17:31:14 crc kubenswrapper[5001]: I0128 17:31:14.430958 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-gcfmc"] Jan 28 17:31:14 crc kubenswrapper[5001]: I0128 17:31:14.526523 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ee24a09-8b61-427a-a338-4a96a4a47716-bound-sa-token\") pod \"cert-manager-86cb77c54b-gcfmc\" (UID: \"8ee24a09-8b61-427a-a338-4a96a4a47716\") " pod="cert-manager/cert-manager-86cb77c54b-gcfmc" Jan 28 17:31:14 crc kubenswrapper[5001]: I0128 17:31:14.526866 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm4rq\" (UniqueName: \"kubernetes.io/projected/8ee24a09-8b61-427a-a338-4a96a4a47716-kube-api-access-fm4rq\") pod \"cert-manager-86cb77c54b-gcfmc\" (UID: \"8ee24a09-8b61-427a-a338-4a96a4a47716\") " pod="cert-manager/cert-manager-86cb77c54b-gcfmc" Jan 28 17:31:14 crc kubenswrapper[5001]: I0128 17:31:14.541838 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8w8w" event={"ID":"8a9885bc-111a-4e49-bf90-4df20f5a486d","Type":"ContainerStarted","Data":"d7177ab2fda0b7142d53940bfdd5350811f4af94c2553a983b139304cb306c37"} Jan 28 17:31:14 crc kubenswrapper[5001]: I0128 17:31:14.629331 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm4rq\" (UniqueName: \"kubernetes.io/projected/8ee24a09-8b61-427a-a338-4a96a4a47716-kube-api-access-fm4rq\") pod \"cert-manager-86cb77c54b-gcfmc\" (UID: \"8ee24a09-8b61-427a-a338-4a96a4a47716\") " pod="cert-manager/cert-manager-86cb77c54b-gcfmc" Jan 28 17:31:14 crc kubenswrapper[5001]: I0128 17:31:14.629565 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ee24a09-8b61-427a-a338-4a96a4a47716-bound-sa-token\") pod \"cert-manager-86cb77c54b-gcfmc\" (UID: \"8ee24a09-8b61-427a-a338-4a96a4a47716\") " pod="cert-manager/cert-manager-86cb77c54b-gcfmc" Jan 28 17:31:14 crc kubenswrapper[5001]: I0128 17:31:14.649714 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ee24a09-8b61-427a-a338-4a96a4a47716-bound-sa-token\") pod \"cert-manager-86cb77c54b-gcfmc\" (UID: \"8ee24a09-8b61-427a-a338-4a96a4a47716\") " pod="cert-manager/cert-manager-86cb77c54b-gcfmc" Jan 28 17:31:14 crc kubenswrapper[5001]: I0128 17:31:14.650737 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm4rq\" (UniqueName: \"kubernetes.io/projected/8ee24a09-8b61-427a-a338-4a96a4a47716-kube-api-access-fm4rq\") pod \"cert-manager-86cb77c54b-gcfmc\" (UID: \"8ee24a09-8b61-427a-a338-4a96a4a47716\") " pod="cert-manager/cert-manager-86cb77c54b-gcfmc" Jan 28 17:31:14 crc kubenswrapper[5001]: I0128 17:31:14.737827 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-gcfmc" Jan 28 17:31:15 crc kubenswrapper[5001]: I0128 17:31:15.117341 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-gcfmc"] Jan 28 17:31:15 crc kubenswrapper[5001]: W0128 17:31:15.118444 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ee24a09_8b61_427a_a338_4a96a4a47716.slice/crio-0e5ac4d7f324743cf9f62f243e51c0460100954861fe54c9bdde7de9c9e4367a WatchSource:0}: Error finding container 0e5ac4d7f324743cf9f62f243e51c0460100954861fe54c9bdde7de9c9e4367a: Status 404 returned error can't find the container with id 0e5ac4d7f324743cf9f62f243e51c0460100954861fe54c9bdde7de9c9e4367a Jan 28 17:31:15 crc kubenswrapper[5001]: I0128 17:31:15.549118 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-gcfmc" event={"ID":"8ee24a09-8b61-427a-a338-4a96a4a47716","Type":"ContainerStarted","Data":"3082ab0d689af9d7e1bf286d0dbf4d30f86beb348d676f202fd64b5db2a84187"} Jan 28 17:31:15 crc kubenswrapper[5001]: I0128 17:31:15.549440 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-gcfmc" event={"ID":"8ee24a09-8b61-427a-a338-4a96a4a47716","Type":"ContainerStarted","Data":"0e5ac4d7f324743cf9f62f243e51c0460100954861fe54c9bdde7de9c9e4367a"} Jan 28 17:31:15 crc kubenswrapper[5001]: I0128 17:31:15.550891 5001 generic.go:334] "Generic (PLEG): container finished" podID="8a9885bc-111a-4e49-bf90-4df20f5a486d" containerID="d7177ab2fda0b7142d53940bfdd5350811f4af94c2553a983b139304cb306c37" exitCode=0 Jan 28 17:31:15 crc kubenswrapper[5001]: I0128 17:31:15.550918 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8w8w" event={"ID":"8a9885bc-111a-4e49-bf90-4df20f5a486d","Type":"ContainerDied","Data":"d7177ab2fda0b7142d53940bfdd5350811f4af94c2553a983b139304cb306c37"} Jan 28 17:31:15 crc kubenswrapper[5001]: I0128 17:31:15.565775 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-gcfmc" podStartSLOduration=1.565759467 podStartE2EDuration="1.565759467s" podCreationTimestamp="2026-01-28 17:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:31:15.561545055 +0000 UTC m=+921.729333285" watchObservedRunningTime="2026-01-28 17:31:15.565759467 +0000 UTC m=+921.733547697" Jan 28 17:31:16 crc kubenswrapper[5001]: I0128 17:31:16.559126 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8w8w" event={"ID":"8a9885bc-111a-4e49-bf90-4df20f5a486d","Type":"ContainerStarted","Data":"7af9f8e1659f66c803ba59b770fcfdbdf5fea342a773b18679578fb96d2dfdf0"} Jan 28 17:31:16 crc kubenswrapper[5001]: I0128 17:31:16.579721 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g8w8w" podStartSLOduration=3.12616829 podStartE2EDuration="5.579702183s" podCreationTimestamp="2026-01-28 17:31:11 +0000 UTC" firstStartedPulling="2026-01-28 17:31:13.536579249 +0000 UTC m=+919.704367479" lastFinishedPulling="2026-01-28 17:31:15.990113132 +0000 UTC m=+922.157901372" observedRunningTime="2026-01-28 17:31:16.576251683 +0000 UTC m=+922.744039933" watchObservedRunningTime="2026-01-28 17:31:16.579702183 +0000 UTC m=+922.747490413" Jan 28 17:31:22 crc 
kubenswrapper[5001]: I0128 17:31:22.309187 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g8w8w" Jan 28 17:31:22 crc kubenswrapper[5001]: I0128 17:31:22.310569 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g8w8w" Jan 28 17:31:22 crc kubenswrapper[5001]: I0128 17:31:22.350332 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g8w8w" Jan 28 17:31:22 crc kubenswrapper[5001]: I0128 17:31:22.633674 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g8w8w" Jan 28 17:31:22 crc kubenswrapper[5001]: I0128 17:31:22.646091 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vdjm7"] Jan 28 17:31:22 crc kubenswrapper[5001]: I0128 17:31:22.647566 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdjm7" Jan 28 17:31:22 crc kubenswrapper[5001]: I0128 17:31:22.664717 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdjm7"] Jan 28 17:31:22 crc kubenswrapper[5001]: I0128 17:31:22.735484 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2rhr\" (UniqueName: \"kubernetes.io/projected/82f058ab-5c5c-42f5-9b08-52a42d8a5fd4-kube-api-access-t2rhr\") pod \"redhat-marketplace-vdjm7\" (UID: \"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4\") " pod="openshift-marketplace/redhat-marketplace-vdjm7" Jan 28 17:31:22 crc kubenswrapper[5001]: I0128 17:31:22.735603 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82f058ab-5c5c-42f5-9b08-52a42d8a5fd4-catalog-content\") pod \"redhat-marketplace-vdjm7\" (UID: \"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4\") " pod="openshift-marketplace/redhat-marketplace-vdjm7" Jan 28 17:31:22 crc kubenswrapper[5001]: I0128 17:31:22.735636 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82f058ab-5c5c-42f5-9b08-52a42d8a5fd4-utilities\") pod \"redhat-marketplace-vdjm7\" (UID: \"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4\") " pod="openshift-marketplace/redhat-marketplace-vdjm7" Jan 28 17:31:22 crc kubenswrapper[5001]: I0128 17:31:22.837276 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2rhr\" (UniqueName: \"kubernetes.io/projected/82f058ab-5c5c-42f5-9b08-52a42d8a5fd4-kube-api-access-t2rhr\") pod \"redhat-marketplace-vdjm7\" (UID: \"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4\") " pod="openshift-marketplace/redhat-marketplace-vdjm7" Jan 28 17:31:22 crc kubenswrapper[5001]: I0128 17:31:22.837357 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82f058ab-5c5c-42f5-9b08-52a42d8a5fd4-catalog-content\") pod \"redhat-marketplace-vdjm7\" (UID: \"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4\") " pod="openshift-marketplace/redhat-marketplace-vdjm7" Jan 28 17:31:22 crc kubenswrapper[5001]: I0128 17:31:22.837382 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/82f058ab-5c5c-42f5-9b08-52a42d8a5fd4-utilities\") pod \"redhat-marketplace-vdjm7\" (UID: \"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4\") " pod="openshift-marketplace/redhat-marketplace-vdjm7" Jan 28 17:31:22 crc kubenswrapper[5001]: I0128 17:31:22.837762 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82f058ab-5c5c-42f5-9b08-52a42d8a5fd4-utilities\") pod \"redhat-marketplace-vdjm7\" (UID: \"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4\") " pod="openshift-marketplace/redhat-marketplace-vdjm7" Jan 28 17:31:22 crc kubenswrapper[5001]: I0128 17:31:22.837794 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82f058ab-5c5c-42f5-9b08-52a42d8a5fd4-catalog-content\") pod \"redhat-marketplace-vdjm7\" (UID: \"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4\") " pod="openshift-marketplace/redhat-marketplace-vdjm7" Jan 28 17:31:22 crc kubenswrapper[5001]: I0128 17:31:22.858708 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2rhr\" (UniqueName: \"kubernetes.io/projected/82f058ab-5c5c-42f5-9b08-52a42d8a5fd4-kube-api-access-t2rhr\") pod \"redhat-marketplace-vdjm7\" (UID: \"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4\") " pod="openshift-marketplace/redhat-marketplace-vdjm7" Jan 28 17:31:22 crc kubenswrapper[5001]: I0128 17:31:22.972522 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdjm7" Jan 28 17:31:23 crc kubenswrapper[5001]: I0128 17:31:23.416258 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdjm7"] Jan 28 17:31:23 crc kubenswrapper[5001]: W0128 17:31:23.420673 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82f058ab_5c5c_42f5_9b08_52a42d8a5fd4.slice/crio-f8a7be98ac3a5cba80e7eae6096fb91f5042e4008c11086ac342713c2aff090d WatchSource:0}: Error finding container f8a7be98ac3a5cba80e7eae6096fb91f5042e4008c11086ac342713c2aff090d: Status 404 returned error can't find the container with id f8a7be98ac3a5cba80e7eae6096fb91f5042e4008c11086ac342713c2aff090d Jan 28 17:31:23 crc kubenswrapper[5001]: I0128 17:31:23.602159 5001 generic.go:334] "Generic (PLEG): container finished" podID="82f058ab-5c5c-42f5-9b08-52a42d8a5fd4" containerID="08511828d39a9d8a9e951f5a3868027cd8149b18d8da05b133d6b4e3099d1590" exitCode=0 Jan 28 17:31:23 crc kubenswrapper[5001]: I0128 17:31:23.602270 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdjm7" event={"ID":"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4","Type":"ContainerDied","Data":"08511828d39a9d8a9e951f5a3868027cd8149b18d8da05b133d6b4e3099d1590"} Jan 28 17:31:23 crc kubenswrapper[5001]: I0128 17:31:23.602563 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdjm7" event={"ID":"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4","Type":"ContainerStarted","Data":"f8a7be98ac3a5cba80e7eae6096fb91f5042e4008c11086ac342713c2aff090d"} Jan 28 17:31:24 crc kubenswrapper[5001]: I0128 17:31:24.989993 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g8w8w"] Jan 28 17:31:25 crc kubenswrapper[5001]: I0128 17:31:25.613570 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g8w8w" 
podUID="8a9885bc-111a-4e49-bf90-4df20f5a486d" containerName="registry-server" containerID="cri-o://7af9f8e1659f66c803ba59b770fcfdbdf5fea342a773b18679578fb96d2dfdf0" gracePeriod=2 Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.579218 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g8w8w" Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.627585 5001 generic.go:334] "Generic (PLEG): container finished" podID="82f058ab-5c5c-42f5-9b08-52a42d8a5fd4" containerID="dec74f7264c201bdfed1d94dc9a33a846ddc93eda17d8617f896eec209b7e069" exitCode=0 Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.627676 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdjm7" event={"ID":"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4","Type":"ContainerDied","Data":"dec74f7264c201bdfed1d94dc9a33a846ddc93eda17d8617f896eec209b7e069"} Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.634606 5001 generic.go:334] "Generic (PLEG): container finished" podID="8a9885bc-111a-4e49-bf90-4df20f5a486d" containerID="7af9f8e1659f66c803ba59b770fcfdbdf5fea342a773b18679578fb96d2dfdf0" exitCode=0 Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.634636 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8w8w" event={"ID":"8a9885bc-111a-4e49-bf90-4df20f5a486d","Type":"ContainerDied","Data":"7af9f8e1659f66c803ba59b770fcfdbdf5fea342a773b18679578fb96d2dfdf0"} Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.634677 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8w8w" event={"ID":"8a9885bc-111a-4e49-bf90-4df20f5a486d","Type":"ContainerDied","Data":"c7f80a04f61b5ce6876d599f27880b62192b6366d2795c3470f4dcdde53e42c3"} Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.634689 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g8w8w" Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.634698 5001 scope.go:117] "RemoveContainer" containerID="7af9f8e1659f66c803ba59b770fcfdbdf5fea342a773b18679578fb96d2dfdf0" Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.658765 5001 scope.go:117] "RemoveContainer" containerID="d7177ab2fda0b7142d53940bfdd5350811f4af94c2553a983b139304cb306c37" Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.673792 5001 scope.go:117] "RemoveContainer" containerID="b8717132d364b9e21873b2545bd5f4263102a578772c19cab470b0b93808ea9f" Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.699496 5001 scope.go:117] "RemoveContainer" containerID="7af9f8e1659f66c803ba59b770fcfdbdf5fea342a773b18679578fb96d2dfdf0" Jan 28 17:31:27 crc kubenswrapper[5001]: E0128 17:31:27.699935 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7af9f8e1659f66c803ba59b770fcfdbdf5fea342a773b18679578fb96d2dfdf0\": container with ID starting with 7af9f8e1659f66c803ba59b770fcfdbdf5fea342a773b18679578fb96d2dfdf0 not found: ID does not exist" containerID="7af9f8e1659f66c803ba59b770fcfdbdf5fea342a773b18679578fb96d2dfdf0" Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.700018 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7af9f8e1659f66c803ba59b770fcfdbdf5fea342a773b18679578fb96d2dfdf0"} err="failed to get container status \"7af9f8e1659f66c803ba59b770fcfdbdf5fea342a773b18679578fb96d2dfdf0\": rpc error: code = NotFound desc = could not find container \"7af9f8e1659f66c803ba59b770fcfdbdf5fea342a773b18679578fb96d2dfdf0\": container with ID starting with 7af9f8e1659f66c803ba59b770fcfdbdf5fea342a773b18679578fb96d2dfdf0 not found: ID does not exist" Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.700045 5001 scope.go:117] "RemoveContainer" containerID="d7177ab2fda0b7142d53940bfdd5350811f4af94c2553a983b139304cb306c37" Jan 28 17:31:27 crc kubenswrapper[5001]: E0128 17:31:27.700464 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7177ab2fda0b7142d53940bfdd5350811f4af94c2553a983b139304cb306c37\": container with ID starting with d7177ab2fda0b7142d53940bfdd5350811f4af94c2553a983b139304cb306c37 not found: ID does not exist" containerID="d7177ab2fda0b7142d53940bfdd5350811f4af94c2553a983b139304cb306c37" Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.700485 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7177ab2fda0b7142d53940bfdd5350811f4af94c2553a983b139304cb306c37"} err="failed to get container status \"d7177ab2fda0b7142d53940bfdd5350811f4af94c2553a983b139304cb306c37\": rpc error: code = NotFound desc = could not find container \"d7177ab2fda0b7142d53940bfdd5350811f4af94c2553a983b139304cb306c37\": container with ID starting with d7177ab2fda0b7142d53940bfdd5350811f4af94c2553a983b139304cb306c37 not found: ID does not exist" Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.700500 5001 scope.go:117] "RemoveContainer" containerID="b8717132d364b9e21873b2545bd5f4263102a578772c19cab470b0b93808ea9f" Jan 28 17:31:27 crc kubenswrapper[5001]: E0128 17:31:27.700778 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8717132d364b9e21873b2545bd5f4263102a578772c19cab470b0b93808ea9f\": container with ID starting 
with b8717132d364b9e21873b2545bd5f4263102a578772c19cab470b0b93808ea9f not found: ID does not exist" containerID="b8717132d364b9e21873b2545bd5f4263102a578772c19cab470b0b93808ea9f" Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.700801 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8717132d364b9e21873b2545bd5f4263102a578772c19cab470b0b93808ea9f"} err="failed to get container status \"b8717132d364b9e21873b2545bd5f4263102a578772c19cab470b0b93808ea9f\": rpc error: code = NotFound desc = could not find container \"b8717132d364b9e21873b2545bd5f4263102a578772c19cab470b0b93808ea9f\": container with ID starting with b8717132d364b9e21873b2545bd5f4263102a578772c19cab470b0b93808ea9f not found: ID does not exist" Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.720848 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a9885bc-111a-4e49-bf90-4df20f5a486d-catalog-content\") pod \"8a9885bc-111a-4e49-bf90-4df20f5a486d\" (UID: \"8a9885bc-111a-4e49-bf90-4df20f5a486d\") " Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.720888 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a9885bc-111a-4e49-bf90-4df20f5a486d-utilities\") pod \"8a9885bc-111a-4e49-bf90-4df20f5a486d\" (UID: \"8a9885bc-111a-4e49-bf90-4df20f5a486d\") " Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.720922 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvh76\" (UniqueName: \"kubernetes.io/projected/8a9885bc-111a-4e49-bf90-4df20f5a486d-kube-api-access-gvh76\") pod \"8a9885bc-111a-4e49-bf90-4df20f5a486d\" (UID: \"8a9885bc-111a-4e49-bf90-4df20f5a486d\") " Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.722467 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a9885bc-111a-4e49-bf90-4df20f5a486d-utilities" (OuterVolumeSpecName: "utilities") pod "8a9885bc-111a-4e49-bf90-4df20f5a486d" (UID: "8a9885bc-111a-4e49-bf90-4df20f5a486d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.728217 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a9885bc-111a-4e49-bf90-4df20f5a486d-kube-api-access-gvh76" (OuterVolumeSpecName: "kube-api-access-gvh76") pod "8a9885bc-111a-4e49-bf90-4df20f5a486d" (UID: "8a9885bc-111a-4e49-bf90-4df20f5a486d"). InnerVolumeSpecName "kube-api-access-gvh76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.769750 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a9885bc-111a-4e49-bf90-4df20f5a486d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a9885bc-111a-4e49-bf90-4df20f5a486d" (UID: "8a9885bc-111a-4e49-bf90-4df20f5a486d"). InnerVolumeSpecName "catalog-content". 
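
The error-level lines above ("ContainerStatus from runtime service failed ... code = NotFound ... ID does not exist") show the kubelet re-checking containers that CRI-O has already pruned while certified-operators-g8w8w is being torn down; they are expected noise during pod deletion rather than real failures. A minimal, stdlib-only Python sketch (an illustration, not kubelet or CRI-O code; the regexes and the "kubelet.log" filename are assumptions based on this excerpt) can separate these benign NotFound deletions from any other error-level kubenswrapper lines in a saved journal:

#!/usr/bin/env python3
# Minimal triage sketch, not kubelet or CRI-O code. Assumes a journal saved as
# one entry per line, e.g. with: journalctl -u kubelet --no-pager > kubelet.log
# ("kubelet.log" is just a placeholder name).
import re
import sys

ERROR_LINE = re.compile(r"kubenswrapper\[\d+\]: E\d{4} ")                 # E-level klog lines
BENIGN = re.compile(r"code = NotFound desc = could not find container")   # already-pruned containers

def triage(path):
    benign = 0
    other = []
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if not ERROR_LINE.search(line):
                continue
            if BENIGN.search(line):
                benign += 1
            else:
                other.append(line.rstrip())
    print(f"benign NotFound deletion errors: {benign}")
    print(f"other error-level lines:         {len(other)}")
    for line in other:
        print(line)

if __name__ == "__main__":
    triage(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log")
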
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.822088 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a9885bc-111a-4e49-bf90-4df20f5a486d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.822122 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a9885bc-111a-4e49-bf90-4df20f5a486d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.822131 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvh76\" (UniqueName: \"kubernetes.io/projected/8a9885bc-111a-4e49-bf90-4df20f5a486d-kube-api-access-gvh76\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.962991 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g8w8w"] Jan 28 17:31:27 crc kubenswrapper[5001]: I0128 17:31:27.967446 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g8w8w"] Jan 28 17:31:28 crc kubenswrapper[5001]: I0128 17:31:28.600711 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a9885bc-111a-4e49-bf90-4df20f5a486d" path="/var/lib/kubelet/pods/8a9885bc-111a-4e49-bf90-4df20f5a486d/volumes" Jan 28 17:31:28 crc kubenswrapper[5001]: I0128 17:31:28.642475 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdjm7" event={"ID":"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4","Type":"ContainerStarted","Data":"c8f1695cf7bf1570c12c95851b2438992e69522601572901ef81efb869053193"} Jan 28 17:31:28 crc kubenswrapper[5001]: I0128 17:31:28.657186 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vdjm7" podStartSLOduration=2.252610461 podStartE2EDuration="6.65716737s" podCreationTimestamp="2026-01-28 17:31:22 +0000 UTC" firstStartedPulling="2026-01-28 17:31:23.603582197 +0000 UTC m=+929.771370427" lastFinishedPulling="2026-01-28 17:31:28.008139116 +0000 UTC m=+934.175927336" observedRunningTime="2026-01-28 17:31:28.656406348 +0000 UTC m=+934.824194588" watchObservedRunningTime="2026-01-28 17:31:28.65716737 +0000 UTC m=+934.824955600" Jan 28 17:31:30 crc kubenswrapper[5001]: I0128 17:31:30.197831 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-49ldn"] Jan 28 17:31:30 crc kubenswrapper[5001]: E0128 17:31:30.199853 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a9885bc-111a-4e49-bf90-4df20f5a486d" containerName="extract-content" Jan 28 17:31:30 crc kubenswrapper[5001]: I0128 17:31:30.199881 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a9885bc-111a-4e49-bf90-4df20f5a486d" containerName="extract-content" Jan 28 17:31:30 crc kubenswrapper[5001]: E0128 17:31:30.199898 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a9885bc-111a-4e49-bf90-4df20f5a486d" containerName="extract-utilities" Jan 28 17:31:30 crc kubenswrapper[5001]: I0128 17:31:30.199907 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a9885bc-111a-4e49-bf90-4df20f5a486d" containerName="extract-utilities" Jan 28 17:31:30 crc kubenswrapper[5001]: E0128 17:31:30.199925 5001 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8a9885bc-111a-4e49-bf90-4df20f5a486d" containerName="registry-server" Jan 28 17:31:30 crc kubenswrapper[5001]: I0128 17:31:30.199933 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a9885bc-111a-4e49-bf90-4df20f5a486d" containerName="registry-server" Jan 28 17:31:30 crc kubenswrapper[5001]: I0128 17:31:30.200088 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a9885bc-111a-4e49-bf90-4df20f5a486d" containerName="registry-server" Jan 28 17:31:30 crc kubenswrapper[5001]: I0128 17:31:30.200680 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-49ldn" Jan 28 17:31:30 crc kubenswrapper[5001]: I0128 17:31:30.202964 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-fgh6r" Jan 28 17:31:30 crc kubenswrapper[5001]: I0128 17:31:30.203081 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 28 17:31:30 crc kubenswrapper[5001]: I0128 17:31:30.203222 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 28 17:31:30 crc kubenswrapper[5001]: I0128 17:31:30.205103 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-49ldn"] Jan 28 17:31:30 crc kubenswrapper[5001]: I0128 17:31:30.357613 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvhd9\" (UniqueName: \"kubernetes.io/projected/13e9dd0f-e7c5-4959-9554-bf34549222cf-kube-api-access-rvhd9\") pod \"openstack-operator-index-49ldn\" (UID: \"13e9dd0f-e7c5-4959-9554-bf34549222cf\") " pod="openstack-operators/openstack-operator-index-49ldn" Jan 28 17:31:30 crc kubenswrapper[5001]: I0128 17:31:30.458823 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvhd9\" (UniqueName: \"kubernetes.io/projected/13e9dd0f-e7c5-4959-9554-bf34549222cf-kube-api-access-rvhd9\") pod \"openstack-operator-index-49ldn\" (UID: \"13e9dd0f-e7c5-4959-9554-bf34549222cf\") " pod="openstack-operators/openstack-operator-index-49ldn" Jan 28 17:31:30 crc kubenswrapper[5001]: I0128 17:31:30.474372 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvhd9\" (UniqueName: \"kubernetes.io/projected/13e9dd0f-e7c5-4959-9554-bf34549222cf-kube-api-access-rvhd9\") pod \"openstack-operator-index-49ldn\" (UID: \"13e9dd0f-e7c5-4959-9554-bf34549222cf\") " pod="openstack-operators/openstack-operator-index-49ldn" Jan 28 17:31:30 crc kubenswrapper[5001]: I0128 17:31:30.518381 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-49ldn" Jan 28 17:31:30 crc kubenswrapper[5001]: I0128 17:31:30.729637 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-49ldn"] Jan 28 17:31:30 crc kubenswrapper[5001]: W0128 17:31:30.731124 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13e9dd0f_e7c5_4959_9554_bf34549222cf.slice/crio-24fa07f3bde8df3dbf1420ac91fa86207c5ad6b8b2919c77d2b1c903afc4777e WatchSource:0}: Error finding container 24fa07f3bde8df3dbf1420ac91fa86207c5ad6b8b2919c77d2b1c903afc4777e: Status 404 returned error can't find the container with id 24fa07f3bde8df3dbf1420ac91fa86207c5ad6b8b2919c77d2b1c903afc4777e Jan 28 17:31:31 crc kubenswrapper[5001]: I0128 17:31:31.680520 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-49ldn" event={"ID":"13e9dd0f-e7c5-4959-9554-bf34549222cf","Type":"ContainerStarted","Data":"24fa07f3bde8df3dbf1420ac91fa86207c5ad6b8b2919c77d2b1c903afc4777e"} Jan 28 17:31:32 crc kubenswrapper[5001]: I0128 17:31:32.972888 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vdjm7" Jan 28 17:31:32 crc kubenswrapper[5001]: I0128 17:31:32.972941 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vdjm7" Jan 28 17:31:33 crc kubenswrapper[5001]: I0128 17:31:33.015086 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vdjm7" Jan 28 17:31:33 crc kubenswrapper[5001]: I0128 17:31:33.734740 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vdjm7" Jan 28 17:31:34 crc kubenswrapper[5001]: I0128 17:31:34.702459 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-49ldn" event={"ID":"13e9dd0f-e7c5-4959-9554-bf34549222cf","Type":"ContainerStarted","Data":"d29a554686f054715981f3f027f52ec5552b2b0685aee36eee2ccc3ff69540d0"} Jan 28 17:31:34 crc kubenswrapper[5001]: I0128 17:31:34.721626 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-49ldn" podStartSLOduration=1.491373801 podStartE2EDuration="4.721607427s" podCreationTimestamp="2026-01-28 17:31:30 +0000 UTC" firstStartedPulling="2026-01-28 17:31:30.73334356 +0000 UTC m=+936.901131790" lastFinishedPulling="2026-01-28 17:31:33.963577186 +0000 UTC m=+940.131365416" observedRunningTime="2026-01-28 17:31:34.71618181 +0000 UTC m=+940.883970050" watchObservedRunningTime="2026-01-28 17:31:34.721607427 +0000 UTC m=+940.889395657" Jan 28 17:31:35 crc kubenswrapper[5001]: I0128 17:31:35.391331 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdjm7"] Jan 28 17:31:35 crc kubenswrapper[5001]: I0128 17:31:35.707914 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vdjm7" podUID="82f058ab-5c5c-42f5-9b08-52a42d8a5fd4" containerName="registry-server" containerID="cri-o://c8f1695cf7bf1570c12c95851b2438992e69522601572901ef81efb869053193" gracePeriod=2 Jan 28 17:31:36 crc kubenswrapper[5001]: I0128 17:31:36.715933 5001 generic.go:334] "Generic (PLEG): container finished" podID="82f058ab-5c5c-42f5-9b08-52a42d8a5fd4" 
containerID="c8f1695cf7bf1570c12c95851b2438992e69522601572901ef81efb869053193" exitCode=0 Jan 28 17:31:36 crc kubenswrapper[5001]: I0128 17:31:36.716051 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdjm7" event={"ID":"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4","Type":"ContainerDied","Data":"c8f1695cf7bf1570c12c95851b2438992e69522601572901ef81efb869053193"} Jan 28 17:31:38 crc kubenswrapper[5001]: I0128 17:31:38.557285 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdjm7" Jan 28 17:31:38 crc kubenswrapper[5001]: I0128 17:31:38.593863 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2rhr\" (UniqueName: \"kubernetes.io/projected/82f058ab-5c5c-42f5-9b08-52a42d8a5fd4-kube-api-access-t2rhr\") pod \"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4\" (UID: \"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4\") " Jan 28 17:31:38 crc kubenswrapper[5001]: I0128 17:31:38.594254 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82f058ab-5c5c-42f5-9b08-52a42d8a5fd4-catalog-content\") pod \"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4\" (UID: \"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4\") " Jan 28 17:31:38 crc kubenswrapper[5001]: I0128 17:31:38.594342 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82f058ab-5c5c-42f5-9b08-52a42d8a5fd4-utilities\") pod \"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4\" (UID: \"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4\") " Jan 28 17:31:38 crc kubenswrapper[5001]: I0128 17:31:38.595294 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82f058ab-5c5c-42f5-9b08-52a42d8a5fd4-utilities" (OuterVolumeSpecName: "utilities") pod "82f058ab-5c5c-42f5-9b08-52a42d8a5fd4" (UID: "82f058ab-5c5c-42f5-9b08-52a42d8a5fd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:31:38 crc kubenswrapper[5001]: I0128 17:31:38.600569 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82f058ab-5c5c-42f5-9b08-52a42d8a5fd4-kube-api-access-t2rhr" (OuterVolumeSpecName: "kube-api-access-t2rhr") pod "82f058ab-5c5c-42f5-9b08-52a42d8a5fd4" (UID: "82f058ab-5c5c-42f5-9b08-52a42d8a5fd4"). InnerVolumeSpecName "kube-api-access-t2rhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:31:38 crc kubenswrapper[5001]: I0128 17:31:38.616699 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82f058ab-5c5c-42f5-9b08-52a42d8a5fd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82f058ab-5c5c-42f5-9b08-52a42d8a5fd4" (UID: "82f058ab-5c5c-42f5-9b08-52a42d8a5fd4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:31:38 crc kubenswrapper[5001]: I0128 17:31:38.695228 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2rhr\" (UniqueName: \"kubernetes.io/projected/82f058ab-5c5c-42f5-9b08-52a42d8a5fd4-kube-api-access-t2rhr\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:38 crc kubenswrapper[5001]: I0128 17:31:38.695268 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82f058ab-5c5c-42f5-9b08-52a42d8a5fd4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:38 crc kubenswrapper[5001]: I0128 17:31:38.695277 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82f058ab-5c5c-42f5-9b08-52a42d8a5fd4-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:38 crc kubenswrapper[5001]: I0128 17:31:38.731851 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdjm7" event={"ID":"82f058ab-5c5c-42f5-9b08-52a42d8a5fd4","Type":"ContainerDied","Data":"f8a7be98ac3a5cba80e7eae6096fb91f5042e4008c11086ac342713c2aff090d"} Jan 28 17:31:38 crc kubenswrapper[5001]: I0128 17:31:38.731890 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdjm7" Jan 28 17:31:38 crc kubenswrapper[5001]: I0128 17:31:38.731942 5001 scope.go:117] "RemoveContainer" containerID="c8f1695cf7bf1570c12c95851b2438992e69522601572901ef81efb869053193" Jan 28 17:31:38 crc kubenswrapper[5001]: I0128 17:31:38.752909 5001 scope.go:117] "RemoveContainer" containerID="dec74f7264c201bdfed1d94dc9a33a846ddc93eda17d8617f896eec209b7e069" Jan 28 17:31:38 crc kubenswrapper[5001]: I0128 17:31:38.764527 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdjm7"] Jan 28 17:31:38 crc kubenswrapper[5001]: I0128 17:31:38.769816 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdjm7"] Jan 28 17:31:38 crc kubenswrapper[5001]: I0128 17:31:38.774752 5001 scope.go:117] "RemoveContainer" containerID="08511828d39a9d8a9e951f5a3868027cd8149b18d8da05b133d6b4e3099d1590" Jan 28 17:31:40 crc kubenswrapper[5001]: I0128 17:31:40.518606 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-49ldn" Jan 28 17:31:40 crc kubenswrapper[5001]: I0128 17:31:40.519044 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-49ldn" Jan 28 17:31:40 crc kubenswrapper[5001]: I0128 17:31:40.551156 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-49ldn" Jan 28 17:31:40 crc kubenswrapper[5001]: I0128 17:31:40.603025 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82f058ab-5c5c-42f5-9b08-52a42d8a5fd4" path="/var/lib/kubelet/pods/82f058ab-5c5c-42f5-9b08-52a42d8a5fd4/volumes" Jan 28 17:31:40 crc kubenswrapper[5001]: I0128 17:31:40.766120 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-49ldn" Jan 28 17:31:43 crc kubenswrapper[5001]: I0128 17:31:43.421071 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8"] Jan 28 17:31:43 crc kubenswrapper[5001]: E0128 
17:31:43.421322 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82f058ab-5c5c-42f5-9b08-52a42d8a5fd4" containerName="extract-content" Jan 28 17:31:43 crc kubenswrapper[5001]: I0128 17:31:43.421334 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="82f058ab-5c5c-42f5-9b08-52a42d8a5fd4" containerName="extract-content" Jan 28 17:31:43 crc kubenswrapper[5001]: E0128 17:31:43.421346 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82f058ab-5c5c-42f5-9b08-52a42d8a5fd4" containerName="registry-server" Jan 28 17:31:43 crc kubenswrapper[5001]: I0128 17:31:43.421352 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="82f058ab-5c5c-42f5-9b08-52a42d8a5fd4" containerName="registry-server" Jan 28 17:31:43 crc kubenswrapper[5001]: E0128 17:31:43.421371 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82f058ab-5c5c-42f5-9b08-52a42d8a5fd4" containerName="extract-utilities" Jan 28 17:31:43 crc kubenswrapper[5001]: I0128 17:31:43.421377 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="82f058ab-5c5c-42f5-9b08-52a42d8a5fd4" containerName="extract-utilities" Jan 28 17:31:43 crc kubenswrapper[5001]: I0128 17:31:43.421475 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="82f058ab-5c5c-42f5-9b08-52a42d8a5fd4" containerName="registry-server" Jan 28 17:31:43 crc kubenswrapper[5001]: I0128 17:31:43.422336 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8" Jan 28 17:31:43 crc kubenswrapper[5001]: I0128 17:31:43.425322 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hrxwm" Jan 28 17:31:43 crc kubenswrapper[5001]: I0128 17:31:43.436959 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8"] Jan 28 17:31:43 crc kubenswrapper[5001]: I0128 17:31:43.462682 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93ddfb0d-4440-4560-b5c5-3e252576ef02-util\") pod \"2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8\" (UID: \"93ddfb0d-4440-4560-b5c5-3e252576ef02\") " pod="openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8" Jan 28 17:31:43 crc kubenswrapper[5001]: I0128 17:31:43.462746 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qlhk\" (UniqueName: \"kubernetes.io/projected/93ddfb0d-4440-4560-b5c5-3e252576ef02-kube-api-access-2qlhk\") pod \"2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8\" (UID: \"93ddfb0d-4440-4560-b5c5-3e252576ef02\") " pod="openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8" Jan 28 17:31:43 crc kubenswrapper[5001]: I0128 17:31:43.462793 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93ddfb0d-4440-4560-b5c5-3e252576ef02-bundle\") pod \"2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8\" (UID: \"93ddfb0d-4440-4560-b5c5-3e252576ef02\") " pod="openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8" Jan 28 17:31:43 crc kubenswrapper[5001]: I0128 17:31:43.564486 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"util\" (UniqueName: \"kubernetes.io/empty-dir/93ddfb0d-4440-4560-b5c5-3e252576ef02-util\") pod \"2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8\" (UID: \"93ddfb0d-4440-4560-b5c5-3e252576ef02\") " pod="openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8" Jan 28 17:31:43 crc kubenswrapper[5001]: I0128 17:31:43.564567 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qlhk\" (UniqueName: \"kubernetes.io/projected/93ddfb0d-4440-4560-b5c5-3e252576ef02-kube-api-access-2qlhk\") pod \"2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8\" (UID: \"93ddfb0d-4440-4560-b5c5-3e252576ef02\") " pod="openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8" Jan 28 17:31:43 crc kubenswrapper[5001]: I0128 17:31:43.564623 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93ddfb0d-4440-4560-b5c5-3e252576ef02-bundle\") pod \"2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8\" (UID: \"93ddfb0d-4440-4560-b5c5-3e252576ef02\") " pod="openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8" Jan 28 17:31:43 crc kubenswrapper[5001]: I0128 17:31:43.565176 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93ddfb0d-4440-4560-b5c5-3e252576ef02-util\") pod \"2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8\" (UID: \"93ddfb0d-4440-4560-b5c5-3e252576ef02\") " pod="openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8" Jan 28 17:31:43 crc kubenswrapper[5001]: I0128 17:31:43.565206 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93ddfb0d-4440-4560-b5c5-3e252576ef02-bundle\") pod \"2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8\" (UID: \"93ddfb0d-4440-4560-b5c5-3e252576ef02\") " pod="openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8" Jan 28 17:31:43 crc kubenswrapper[5001]: I0128 17:31:43.587584 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qlhk\" (UniqueName: \"kubernetes.io/projected/93ddfb0d-4440-4560-b5c5-3e252576ef02-kube-api-access-2qlhk\") pod \"2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8\" (UID: \"93ddfb0d-4440-4560-b5c5-3e252576ef02\") " pod="openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8" Jan 28 17:31:43 crc kubenswrapper[5001]: I0128 17:31:43.742520 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8" Jan 28 17:31:44 crc kubenswrapper[5001]: I0128 17:31:44.132326 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8"] Jan 28 17:31:44 crc kubenswrapper[5001]: I0128 17:31:44.781851 5001 generic.go:334] "Generic (PLEG): container finished" podID="93ddfb0d-4440-4560-b5c5-3e252576ef02" containerID="9e5f7f2c17ea1e147baf23badc2f15ce2204a30b17a35a5e4603d3b724db050e" exitCode=0 Jan 28 17:31:44 crc kubenswrapper[5001]: I0128 17:31:44.782132 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8" event={"ID":"93ddfb0d-4440-4560-b5c5-3e252576ef02","Type":"ContainerDied","Data":"9e5f7f2c17ea1e147baf23badc2f15ce2204a30b17a35a5e4603d3b724db050e"} Jan 28 17:31:44 crc kubenswrapper[5001]: I0128 17:31:44.782161 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8" event={"ID":"93ddfb0d-4440-4560-b5c5-3e252576ef02","Type":"ContainerStarted","Data":"2dbd7ec5c4d0ccbc8baae366a557cde2197028bda10c31a10f28957e4734fd43"} Jan 28 17:31:47 crc kubenswrapper[5001]: I0128 17:31:47.801291 5001 generic.go:334] "Generic (PLEG): container finished" podID="93ddfb0d-4440-4560-b5c5-3e252576ef02" containerID="ab93228c8ddd7a83f073bdaa7dfad18b78158cd6ef8040ec864b83e33657fc62" exitCode=0 Jan 28 17:31:47 crc kubenswrapper[5001]: I0128 17:31:47.801398 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8" event={"ID":"93ddfb0d-4440-4560-b5c5-3e252576ef02","Type":"ContainerDied","Data":"ab93228c8ddd7a83f073bdaa7dfad18b78158cd6ef8040ec864b83e33657fc62"} Jan 28 17:31:48 crc kubenswrapper[5001]: I0128 17:31:48.810468 5001 generic.go:334] "Generic (PLEG): container finished" podID="93ddfb0d-4440-4560-b5c5-3e252576ef02" containerID="0aae0c69572d15f6b8472107017235c94e6954db5c242cacc5a5fdaa611215d0" exitCode=0 Jan 28 17:31:48 crc kubenswrapper[5001]: I0128 17:31:48.810571 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8" event={"ID":"93ddfb0d-4440-4560-b5c5-3e252576ef02","Type":"ContainerDied","Data":"0aae0c69572d15f6b8472107017235c94e6954db5c242cacc5a5fdaa611215d0"} Jan 28 17:31:50 crc kubenswrapper[5001]: I0128 17:31:50.036330 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8" Jan 28 17:31:50 crc kubenswrapper[5001]: I0128 17:31:50.152561 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93ddfb0d-4440-4560-b5c5-3e252576ef02-bundle\") pod \"93ddfb0d-4440-4560-b5c5-3e252576ef02\" (UID: \"93ddfb0d-4440-4560-b5c5-3e252576ef02\") " Jan 28 17:31:50 crc kubenswrapper[5001]: I0128 17:31:50.152660 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93ddfb0d-4440-4560-b5c5-3e252576ef02-util\") pod \"93ddfb0d-4440-4560-b5c5-3e252576ef02\" (UID: \"93ddfb0d-4440-4560-b5c5-3e252576ef02\") " Jan 28 17:31:50 crc kubenswrapper[5001]: I0128 17:31:50.152720 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qlhk\" (UniqueName: \"kubernetes.io/projected/93ddfb0d-4440-4560-b5c5-3e252576ef02-kube-api-access-2qlhk\") pod \"93ddfb0d-4440-4560-b5c5-3e252576ef02\" (UID: \"93ddfb0d-4440-4560-b5c5-3e252576ef02\") " Jan 28 17:31:50 crc kubenswrapper[5001]: I0128 17:31:50.153335 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93ddfb0d-4440-4560-b5c5-3e252576ef02-bundle" (OuterVolumeSpecName: "bundle") pod "93ddfb0d-4440-4560-b5c5-3e252576ef02" (UID: "93ddfb0d-4440-4560-b5c5-3e252576ef02"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:31:50 crc kubenswrapper[5001]: I0128 17:31:50.158159 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93ddfb0d-4440-4560-b5c5-3e252576ef02-kube-api-access-2qlhk" (OuterVolumeSpecName: "kube-api-access-2qlhk") pod "93ddfb0d-4440-4560-b5c5-3e252576ef02" (UID: "93ddfb0d-4440-4560-b5c5-3e252576ef02"). InnerVolumeSpecName "kube-api-access-2qlhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:31:50 crc kubenswrapper[5001]: I0128 17:31:50.198717 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93ddfb0d-4440-4560-b5c5-3e252576ef02-util" (OuterVolumeSpecName: "util") pod "93ddfb0d-4440-4560-b5c5-3e252576ef02" (UID: "93ddfb0d-4440-4560-b5c5-3e252576ef02"). InnerVolumeSpecName "util". 
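
The PLEG lines above record each container of the bundle-unpack pod (2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8) starting and finishing with exit code 0 before its sandbox and volumes are cleaned up. A rough Python sketch (stdlib-only and based purely on the line formats visible in this excerpt, so the field layout is an assumption) that groups those events by pod and annotates each container or sandbox ID with the exit code from a matching "container finished" line, where one exists:

#!/usr/bin/env python3
# Rough sketch based only on the line formats visible in this excerpt: group the
# kubelet's PLEG events by pod and annotate each container/sandbox ID with the
# exit code from a matching "container finished" line, where one exists.
import re
import sys
from collections import defaultdict

EVENT = re.compile(
    r'event for pod" pod="(?P<pod>[^"]+)" '
    r'event=\{"ID":"[^"]+","Type":"(?P<type>[^"]+)","Data":"(?P<data>[^"]+)"\}'
)
FINISHED = re.compile(r'containerID="(?P<cid>[0-9a-f]+)" exitCode=(?P<rc>-?\d+)')

def timeline(path):
    events = defaultdict(list)   # pod name -> [(event type, container or sandbox id)]
    exit_codes = {}              # container id -> exit code
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = EVENT.search(line)
            if m:
                events[m["pod"]].append((m["type"], m["data"]))
            f = FINISHED.search(line)
            if f:
                exit_codes[f["cid"]] = f["rc"]
    for pod, evs in events.items():
        print(pod)
        for etype, cid in evs:
            print(f"  {etype:<18} {cid[:13]} exitCode={exit_codes.get(cid, '-')}")

if __name__ == "__main__":
    timeline(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log")
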
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:31:50 crc kubenswrapper[5001]: I0128 17:31:50.254638 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qlhk\" (UniqueName: \"kubernetes.io/projected/93ddfb0d-4440-4560-b5c5-3e252576ef02-kube-api-access-2qlhk\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:50 crc kubenswrapper[5001]: I0128 17:31:50.254687 5001 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93ddfb0d-4440-4560-b5c5-3e252576ef02-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:50 crc kubenswrapper[5001]: I0128 17:31:50.254698 5001 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93ddfb0d-4440-4560-b5c5-3e252576ef02-util\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:50 crc kubenswrapper[5001]: I0128 17:31:50.835216 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8" event={"ID":"93ddfb0d-4440-4560-b5c5-3e252576ef02","Type":"ContainerDied","Data":"2dbd7ec5c4d0ccbc8baae366a557cde2197028bda10c31a10f28957e4734fd43"} Jan 28 17:31:50 crc kubenswrapper[5001]: I0128 17:31:50.835311 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dbd7ec5c4d0ccbc8baae366a557cde2197028bda10c31a10f28957e4734fd43" Jan 28 17:31:50 crc kubenswrapper[5001]: I0128 17:31:50.835487 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8" Jan 28 17:31:53 crc kubenswrapper[5001]: I0128 17:31:53.197561 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w"] Jan 28 17:31:53 crc kubenswrapper[5001]: E0128 17:31:53.198096 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ddfb0d-4440-4560-b5c5-3e252576ef02" containerName="util" Jan 28 17:31:53 crc kubenswrapper[5001]: I0128 17:31:53.198109 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ddfb0d-4440-4560-b5c5-3e252576ef02" containerName="util" Jan 28 17:31:53 crc kubenswrapper[5001]: E0128 17:31:53.198119 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ddfb0d-4440-4560-b5c5-3e252576ef02" containerName="pull" Jan 28 17:31:53 crc kubenswrapper[5001]: I0128 17:31:53.198125 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ddfb0d-4440-4560-b5c5-3e252576ef02" containerName="pull" Jan 28 17:31:53 crc kubenswrapper[5001]: E0128 17:31:53.198145 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ddfb0d-4440-4560-b5c5-3e252576ef02" containerName="extract" Jan 28 17:31:53 crc kubenswrapper[5001]: I0128 17:31:53.198151 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ddfb0d-4440-4560-b5c5-3e252576ef02" containerName="extract" Jan 28 17:31:53 crc kubenswrapper[5001]: I0128 17:31:53.198259 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="93ddfb0d-4440-4560-b5c5-3e252576ef02" containerName="extract" Jan 28 17:31:53 crc kubenswrapper[5001]: I0128 17:31:53.198644 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w" Jan 28 17:31:53 crc kubenswrapper[5001]: I0128 17:31:53.200781 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-djp5t" Jan 28 17:31:53 crc kubenswrapper[5001]: I0128 17:31:53.228777 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w"] Jan 28 17:31:53 crc kubenswrapper[5001]: I0128 17:31:53.300301 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwg9f\" (UniqueName: \"kubernetes.io/projected/b17f77be-35c8-4b24-945a-7f9c10a4c78a-kube-api-access-kwg9f\") pod \"openstack-operator-controller-init-679d48b6f-nfb9w\" (UID: \"b17f77be-35c8-4b24-945a-7f9c10a4c78a\") " pod="openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w" Jan 28 17:31:53 crc kubenswrapper[5001]: I0128 17:31:53.402133 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwg9f\" (UniqueName: \"kubernetes.io/projected/b17f77be-35c8-4b24-945a-7f9c10a4c78a-kube-api-access-kwg9f\") pod \"openstack-operator-controller-init-679d48b6f-nfb9w\" (UID: \"b17f77be-35c8-4b24-945a-7f9c10a4c78a\") " pod="openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w" Jan 28 17:31:53 crc kubenswrapper[5001]: I0128 17:31:53.423452 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwg9f\" (UniqueName: \"kubernetes.io/projected/b17f77be-35c8-4b24-945a-7f9c10a4c78a-kube-api-access-kwg9f\") pod \"openstack-operator-controller-init-679d48b6f-nfb9w\" (UID: \"b17f77be-35c8-4b24-945a-7f9c10a4c78a\") " pod="openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w" Jan 28 17:31:53 crc kubenswrapper[5001]: I0128 17:31:53.518311 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w" Jan 28 17:31:53 crc kubenswrapper[5001]: I0128 17:31:53.748015 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w"] Jan 28 17:31:53 crc kubenswrapper[5001]: I0128 17:31:53.856882 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w" event={"ID":"b17f77be-35c8-4b24-945a-7f9c10a4c78a","Type":"ContainerStarted","Data":"a7b47adc575dedfe3eb58a5de866b18ffdf9d1cd79c9496dac82210e5fe64e61"} Jan 28 17:31:57 crc kubenswrapper[5001]: I0128 17:31:57.885795 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w" event={"ID":"b17f77be-35c8-4b24-945a-7f9c10a4c78a","Type":"ContainerStarted","Data":"041a5c90a6f37f61bedae903f321c78393c762b6403cecc8ec1eecb64c21bf46"} Jan 28 17:31:57 crc kubenswrapper[5001]: I0128 17:31:57.886472 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w" Jan 28 17:31:57 crc kubenswrapper[5001]: I0128 17:31:57.915918 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w" podStartSLOduration=1.019520042 podStartE2EDuration="4.915900966s" podCreationTimestamp="2026-01-28 17:31:53 +0000 UTC" firstStartedPulling="2026-01-28 17:31:53.760535209 +0000 UTC m=+959.928323439" lastFinishedPulling="2026-01-28 17:31:57.656916133 +0000 UTC m=+963.824704363" observedRunningTime="2026-01-28 17:31:57.911496919 +0000 UTC m=+964.079285149" watchObservedRunningTime="2026-01-28 17:31:57.915900966 +0000 UTC m=+964.083689196" Jan 28 17:32:03 crc kubenswrapper[5001]: I0128 17:32:03.522379 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w" Jan 28 17:32:05 crc kubenswrapper[5001]: I0128 17:32:05.654889 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bb6js"] Jan 28 17:32:05 crc kubenswrapper[5001]: I0128 17:32:05.656674 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bb6js" Jan 28 17:32:05 crc kubenswrapper[5001]: I0128 17:32:05.670167 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bb6js"] Jan 28 17:32:05 crc kubenswrapper[5001]: I0128 17:32:05.765023 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/459effeb-5d45-4ff0-92ec-cbd95f88d17c-utilities\") pod \"community-operators-bb6js\" (UID: \"459effeb-5d45-4ff0-92ec-cbd95f88d17c\") " pod="openshift-marketplace/community-operators-bb6js" Jan 28 17:32:05 crc kubenswrapper[5001]: I0128 17:32:05.765116 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/459effeb-5d45-4ff0-92ec-cbd95f88d17c-catalog-content\") pod \"community-operators-bb6js\" (UID: \"459effeb-5d45-4ff0-92ec-cbd95f88d17c\") " pod="openshift-marketplace/community-operators-bb6js" Jan 28 17:32:05 crc kubenswrapper[5001]: I0128 17:32:05.765148 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w4z2\" (UniqueName: \"kubernetes.io/projected/459effeb-5d45-4ff0-92ec-cbd95f88d17c-kube-api-access-9w4z2\") pod \"community-operators-bb6js\" (UID: \"459effeb-5d45-4ff0-92ec-cbd95f88d17c\") " pod="openshift-marketplace/community-operators-bb6js" Jan 28 17:32:05 crc kubenswrapper[5001]: I0128 17:32:05.866590 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/459effeb-5d45-4ff0-92ec-cbd95f88d17c-utilities\") pod \"community-operators-bb6js\" (UID: \"459effeb-5d45-4ff0-92ec-cbd95f88d17c\") " pod="openshift-marketplace/community-operators-bb6js" Jan 28 17:32:05 crc kubenswrapper[5001]: I0128 17:32:05.866712 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/459effeb-5d45-4ff0-92ec-cbd95f88d17c-catalog-content\") pod \"community-operators-bb6js\" (UID: \"459effeb-5d45-4ff0-92ec-cbd95f88d17c\") " pod="openshift-marketplace/community-operators-bb6js" Jan 28 17:32:05 crc kubenswrapper[5001]: I0128 17:32:05.866773 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w4z2\" (UniqueName: \"kubernetes.io/projected/459effeb-5d45-4ff0-92ec-cbd95f88d17c-kube-api-access-9w4z2\") pod \"community-operators-bb6js\" (UID: \"459effeb-5d45-4ff0-92ec-cbd95f88d17c\") " pod="openshift-marketplace/community-operators-bb6js" Jan 28 17:32:05 crc kubenswrapper[5001]: I0128 17:32:05.867157 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/459effeb-5d45-4ff0-92ec-cbd95f88d17c-catalog-content\") pod \"community-operators-bb6js\" (UID: \"459effeb-5d45-4ff0-92ec-cbd95f88d17c\") " pod="openshift-marketplace/community-operators-bb6js" Jan 28 17:32:05 crc kubenswrapper[5001]: I0128 17:32:05.867216 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/459effeb-5d45-4ff0-92ec-cbd95f88d17c-utilities\") pod \"community-operators-bb6js\" (UID: \"459effeb-5d45-4ff0-92ec-cbd95f88d17c\") " pod="openshift-marketplace/community-operators-bb6js" Jan 28 17:32:05 crc kubenswrapper[5001]: I0128 17:32:05.895345 5001 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9w4z2\" (UniqueName: \"kubernetes.io/projected/459effeb-5d45-4ff0-92ec-cbd95f88d17c-kube-api-access-9w4z2\") pod \"community-operators-bb6js\" (UID: \"459effeb-5d45-4ff0-92ec-cbd95f88d17c\") " pod="openshift-marketplace/community-operators-bb6js" Jan 28 17:32:05 crc kubenswrapper[5001]: I0128 17:32:05.975038 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bb6js" Jan 28 17:32:06 crc kubenswrapper[5001]: I0128 17:32:06.280963 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bb6js"] Jan 28 17:32:06 crc kubenswrapper[5001]: I0128 17:32:06.942493 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bb6js" event={"ID":"459effeb-5d45-4ff0-92ec-cbd95f88d17c","Type":"ContainerStarted","Data":"f7f73b05ca8d4d377b148761374d7a275154f749c6b3622e2fa66554fc01c4c2"} Jan 28 17:32:07 crc kubenswrapper[5001]: I0128 17:32:07.950105 5001 generic.go:334] "Generic (PLEG): container finished" podID="459effeb-5d45-4ff0-92ec-cbd95f88d17c" containerID="11179890ed1d441fb33fd9258bc415f68904e27567281dd4d87c9b092e725861" exitCode=0 Jan 28 17:32:07 crc kubenswrapper[5001]: I0128 17:32:07.950200 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bb6js" event={"ID":"459effeb-5d45-4ff0-92ec-cbd95f88d17c","Type":"ContainerDied","Data":"11179890ed1d441fb33fd9258bc415f68904e27567281dd4d87c9b092e725861"} Jan 28 17:32:13 crc kubenswrapper[5001]: I0128 17:32:13.989446 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bb6js" event={"ID":"459effeb-5d45-4ff0-92ec-cbd95f88d17c","Type":"ContainerStarted","Data":"a9a6adc2ff06d879bd8fc06c2cf293f8103712a2289d4d18203d8052eff6a4c1"} Jan 28 17:32:14 crc kubenswrapper[5001]: I0128 17:32:14.996273 5001 generic.go:334] "Generic (PLEG): container finished" podID="459effeb-5d45-4ff0-92ec-cbd95f88d17c" containerID="a9a6adc2ff06d879bd8fc06c2cf293f8103712a2289d4d18203d8052eff6a4c1" exitCode=0 Jan 28 17:32:14 crc kubenswrapper[5001]: I0128 17:32:14.996330 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bb6js" event={"ID":"459effeb-5d45-4ff0-92ec-cbd95f88d17c","Type":"ContainerDied","Data":"a9a6adc2ff06d879bd8fc06c2cf293f8103712a2289d4d18203d8052eff6a4c1"} Jan 28 17:32:16 crc kubenswrapper[5001]: I0128 17:32:16.005456 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bb6js" event={"ID":"459effeb-5d45-4ff0-92ec-cbd95f88d17c","Type":"ContainerStarted","Data":"d1d5e82f5a590683ab7acff7108e739dfee38ce0605f7b95ef9060ca58eda717"} Jan 28 17:32:16 crc kubenswrapper[5001]: I0128 17:32:16.031931 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bb6js" podStartSLOduration=3.491524102 podStartE2EDuration="11.03191277s" podCreationTimestamp="2026-01-28 17:32:05 +0000 UTC" firstStartedPulling="2026-01-28 17:32:07.952080993 +0000 UTC m=+974.119869223" lastFinishedPulling="2026-01-28 17:32:15.492469661 +0000 UTC m=+981.660257891" observedRunningTime="2026-01-28 17:32:16.02709189 +0000 UTC m=+982.194880130" watchObservedRunningTime="2026-01-28 17:32:16.03191277 +0000 UTC m=+982.199701000" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.569435 5001 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-jvbp8"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.571105 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-pvh7q"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.571756 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-pvh7q" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.572236 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-jvbp8" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.574675 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-jmh2q" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.574879 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-t86k4" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.579332 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-jvbp8"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.599890 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-jncbk"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.600848 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jncbk" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.602326 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-6mrk8" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.608815 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-cv7jq"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.609758 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-cv7jq" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.616745 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-z5gtt" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.627334 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-pvh7q"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.645484 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-rpd8k"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.646426 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-rpd8k" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.650475 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-dthq7" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.658170 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-cv7jq"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.668122 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-rpd8k"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.688095 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dt2m9"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.689118 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dt2m9" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.691688 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-hx5ck" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.695878 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-jm966"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.697152 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-jm966" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.699118 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-jxkbc" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.699208 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.703893 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-568m9\" (UniqueName: \"kubernetes.io/projected/44247ccc-08d6-4c04-ae14-7595add07217-kube-api-access-568m9\") pod \"cinder-operator-controller-manager-7478f7dbf9-pvh7q\" (UID: \"44247ccc-08d6-4c04-ae14-7595add07217\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-pvh7q" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.704020 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcltq\" (UniqueName: \"kubernetes.io/projected/cff5af3e-db62-41be-b49c-8df7ea7a015a-kube-api-access-zcltq\") pod \"designate-operator-controller-manager-b45d7bf98-jncbk\" (UID: \"cff5af3e-db62-41be-b49c-8df7ea7a015a\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jncbk" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.704070 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qc8d\" (UniqueName: \"kubernetes.io/projected/842efc7d-25a3-4383-9ca0-a3d2e101990a-kube-api-access-8qc8d\") pod \"barbican-operator-controller-manager-7f86f8796f-jvbp8\" (UID: \"842efc7d-25a3-4383-9ca0-a3d2e101990a\") " 
pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-jvbp8" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.704358 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dt2m9"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.715745 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-twmcs"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.716832 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-twmcs" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.719889 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-dlz9g" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.725308 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-jncbk"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.739023 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-p6lff"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.739748 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-p6lff" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.746797 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-twmcs"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.752719 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-tpzjz" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.754550 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-jm966"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.764253 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-p6lff"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.779661 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-qrfs7"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.780644 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qrfs7" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.785010 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-v9dms" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.811962 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-qrfs7"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.823261 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8wvm\" (UniqueName: \"kubernetes.io/projected/0311571f-c23c-4554-8763-a3daced65fc8-kube-api-access-h8wvm\") pod \"horizon-operator-controller-manager-77d5c5b54f-dt2m9\" (UID: \"0311571f-c23c-4554-8763-a3daced65fc8\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dt2m9" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.823355 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcltq\" (UniqueName: \"kubernetes.io/projected/cff5af3e-db62-41be-b49c-8df7ea7a015a-kube-api-access-zcltq\") pod \"designate-operator-controller-manager-b45d7bf98-jncbk\" (UID: \"cff5af3e-db62-41be-b49c-8df7ea7a015a\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jncbk" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.823442 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5hm7\" (UniqueName: \"kubernetes.io/projected/f3751f38-96d6-42a8-98da-05cfbd294fb5-kube-api-access-v5hm7\") pod \"heat-operator-controller-manager-594c8c9d5d-rpd8k\" (UID: \"f3751f38-96d6-42a8-98da-05cfbd294fb5\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-rpd8k" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.823625 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qc8d\" (UniqueName: \"kubernetes.io/projected/842efc7d-25a3-4383-9ca0-a3d2e101990a-kube-api-access-8qc8d\") pod \"barbican-operator-controller-manager-7f86f8796f-jvbp8\" (UID: \"842efc7d-25a3-4383-9ca0-a3d2e101990a\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-jvbp8" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.823699 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnk5h\" (UniqueName: \"kubernetes.io/projected/1f3f3b33-d586-448c-a967-fcd03c6fb11d-kube-api-access-bnk5h\") pod \"glance-operator-controller-manager-78fdd796fd-cv7jq\" (UID: \"1f3f3b33-d586-448c-a967-fcd03c6fb11d\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-cv7jq" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.823769 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kdqj\" (UniqueName: \"kubernetes.io/projected/42016b42-c753-4265-9902-c2969117ad64-kube-api-access-9kdqj\") pod \"infra-operator-controller-manager-694cf4f878-jm966\" (UID: \"42016b42-c753-4265-9902-c2969117ad64\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-jm966" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.824447 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-568m9\" (UniqueName: 
\"kubernetes.io/projected/44247ccc-08d6-4c04-ae14-7595add07217-kube-api-access-568m9\") pod \"cinder-operator-controller-manager-7478f7dbf9-pvh7q\" (UID: \"44247ccc-08d6-4c04-ae14-7595add07217\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-pvh7q" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.824485 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert\") pod \"infra-operator-controller-manager-694cf4f878-jm966\" (UID: \"42016b42-c753-4265-9902-c2969117ad64\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-jm966" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.890901 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-568m9\" (UniqueName: \"kubernetes.io/projected/44247ccc-08d6-4c04-ae14-7595add07217-kube-api-access-568m9\") pod \"cinder-operator-controller-manager-7478f7dbf9-pvh7q\" (UID: \"44247ccc-08d6-4c04-ae14-7595add07217\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-pvh7q" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.892533 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcltq\" (UniqueName: \"kubernetes.io/projected/cff5af3e-db62-41be-b49c-8df7ea7a015a-kube-api-access-zcltq\") pod \"designate-operator-controller-manager-b45d7bf98-jncbk\" (UID: \"cff5af3e-db62-41be-b49c-8df7ea7a015a\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jncbk" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.895566 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qc8d\" (UniqueName: \"kubernetes.io/projected/842efc7d-25a3-4383-9ca0-a3d2e101990a-kube-api-access-8qc8d\") pod \"barbican-operator-controller-manager-7f86f8796f-jvbp8\" (UID: \"842efc7d-25a3-4383-9ca0-a3d2e101990a\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-jvbp8" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.912456 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6kwks"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.916684 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-c44sh"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.917331 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c44sh" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.917809 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6kwks" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.918422 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-pvh7q" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.933813 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7vtr\" (UniqueName: \"kubernetes.io/projected/77b3a8a4-addb-4f1c-95e5-8ad4b54ebf7d-kube-api-access-x7vtr\") pod \"ironic-operator-controller-manager-598f7747c9-twmcs\" (UID: \"77b3a8a4-addb-4f1c-95e5-8ad4b54ebf7d\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-twmcs" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.933869 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8wvm\" (UniqueName: \"kubernetes.io/projected/0311571f-c23c-4554-8763-a3daced65fc8-kube-api-access-h8wvm\") pod \"horizon-operator-controller-manager-77d5c5b54f-dt2m9\" (UID: \"0311571f-c23c-4554-8763-a3daced65fc8\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dt2m9" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.933933 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lhgk\" (UniqueName: \"kubernetes.io/projected/0d72532d-5aac-40f4-b308-4ac21a287e81-kube-api-access-9lhgk\") pod \"keystone-operator-controller-manager-b8b6d4659-p6lff\" (UID: \"0d72532d-5aac-40f4-b308-4ac21a287e81\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-p6lff" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.933961 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5hm7\" (UniqueName: \"kubernetes.io/projected/f3751f38-96d6-42a8-98da-05cfbd294fb5-kube-api-access-v5hm7\") pod \"heat-operator-controller-manager-594c8c9d5d-rpd8k\" (UID: \"f3751f38-96d6-42a8-98da-05cfbd294fb5\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-rpd8k" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.933992 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8sms\" (UniqueName: \"kubernetes.io/projected/2049f024-4549-43c1-b3ea-c42b38ade539-kube-api-access-t8sms\") pod \"manila-operator-controller-manager-78c6999f6f-qrfs7\" (UID: \"2049f024-4549-43c1-b3ea-c42b38ade539\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qrfs7" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.934021 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnk5h\" (UniqueName: \"kubernetes.io/projected/1f3f3b33-d586-448c-a967-fcd03c6fb11d-kube-api-access-bnk5h\") pod \"glance-operator-controller-manager-78fdd796fd-cv7jq\" (UID: \"1f3f3b33-d586-448c-a967-fcd03c6fb11d\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-cv7jq" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.934041 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kdqj\" (UniqueName: \"kubernetes.io/projected/42016b42-c753-4265-9902-c2969117ad64-kube-api-access-9kdqj\") pod \"infra-operator-controller-manager-694cf4f878-jm966\" (UID: \"42016b42-c753-4265-9902-c2969117ad64\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-jm966" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.934090 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert\") pod \"infra-operator-controller-manager-694cf4f878-jm966\" (UID: \"42016b42-c753-4265-9902-c2969117ad64\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-jm966" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.934585 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-jvbp8" Jan 28 17:32:23 crc kubenswrapper[5001]: E0128 17:32:23.936577 5001 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 17:32:23 crc kubenswrapper[5001]: E0128 17:32:23.936639 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert podName:42016b42-c753-4265-9902-c2969117ad64 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:24.436620184 +0000 UTC m=+990.604408414 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert") pod "infra-operator-controller-manager-694cf4f878-jm966" (UID: "42016b42-c753-4265-9902-c2969117ad64") : secret "infra-operator-webhook-server-cert" not found Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.939180 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jncbk" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.952077 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-b4ttw" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.952323 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-8k2mc" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.952778 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6kwks"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.975925 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kdqj\" (UniqueName: \"kubernetes.io/projected/42016b42-c753-4265-9902-c2969117ad64-kube-api-access-9kdqj\") pod \"infra-operator-controller-manager-694cf4f878-jm966\" (UID: \"42016b42-c753-4265-9902-c2969117ad64\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-jm966" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.985720 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5hm7\" (UniqueName: \"kubernetes.io/projected/f3751f38-96d6-42a8-98da-05cfbd294fb5-kube-api-access-v5hm7\") pod \"heat-operator-controller-manager-594c8c9d5d-rpd8k\" (UID: \"f3751f38-96d6-42a8-98da-05cfbd294fb5\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-rpd8k" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.985896 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-c44sh"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.985960 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct"] Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.986502 5001 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-rpd8k" Jan 28 17:32:23 crc kubenswrapper[5001]: I0128 17:32:23.986964 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:23.997605 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8wvm\" (UniqueName: \"kubernetes.io/projected/0311571f-c23c-4554-8763-a3daced65fc8-kube-api-access-h8wvm\") pod \"horizon-operator-controller-manager-77d5c5b54f-dt2m9\" (UID: \"0311571f-c23c-4554-8763-a3daced65fc8\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dt2m9" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:23.997665 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.005308 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-k8xdz" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.019032 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l44l6"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.019853 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l44l6" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.023367 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dt2m9" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.034167 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.035131 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.037581 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7vtr\" (UniqueName: \"kubernetes.io/projected/77b3a8a4-addb-4f1c-95e5-8ad4b54ebf7d-kube-api-access-x7vtr\") pod \"ironic-operator-controller-manager-598f7747c9-twmcs\" (UID: \"77b3a8a4-addb-4f1c-95e5-8ad4b54ebf7d\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-twmcs" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.037623 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lhgk\" (UniqueName: \"kubernetes.io/projected/0d72532d-5aac-40f4-b308-4ac21a287e81-kube-api-access-9lhgk\") pod \"keystone-operator-controller-manager-b8b6d4659-p6lff\" (UID: \"0d72532d-5aac-40f4-b308-4ac21a287e81\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-p6lff" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.037656 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8sms\" (UniqueName: \"kubernetes.io/projected/2049f024-4549-43c1-b3ea-c42b38ade539-kube-api-access-t8sms\") pod \"manila-operator-controller-manager-78c6999f6f-qrfs7\" (UID: \"2049f024-4549-43c1-b3ea-c42b38ade539\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qrfs7" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.037751 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trkk2\" (UniqueName: \"kubernetes.io/projected/660281b0-2db3-4f96-a8c5-69c0ca0a5072-kube-api-access-trkk2\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-6kwks\" (UID: \"660281b0-2db3-4f96-a8c5-69c0ca0a5072\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6kwks" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.037781 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn545\" (UniqueName: \"kubernetes.io/projected/730865cc-5b68-4c45-927b-8a5fee90c539-kube-api-access-rn545\") pod \"neutron-operator-controller-manager-78d58447c5-c44sh\" (UID: \"730865cc-5b68-4c45-927b-8a5fee90c539\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c44sh" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.054278 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l44l6"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.057331 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-2jt7j" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.057561 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.057672 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-vq7zk" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.057743 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnk5h\" (UniqueName: \"kubernetes.io/projected/1f3f3b33-d586-448c-a967-fcd03c6fb11d-kube-api-access-bnk5h\") pod 
\"glance-operator-controller-manager-78fdd796fd-cv7jq\" (UID: \"1f3f3b33-d586-448c-a967-fcd03c6fb11d\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-cv7jq" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.069050 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-ll4sm"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.069883 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ll4sm" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.084686 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-c27p4" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.085599 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lhgk\" (UniqueName: \"kubernetes.io/projected/0d72532d-5aac-40f4-b308-4ac21a287e81-kube-api-access-9lhgk\") pod \"keystone-operator-controller-manager-b8b6d4659-p6lff\" (UID: \"0d72532d-5aac-40f4-b308-4ac21a287e81\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-p6lff" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.089208 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.105181 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7vtr\" (UniqueName: \"kubernetes.io/projected/77b3a8a4-addb-4f1c-95e5-8ad4b54ebf7d-kube-api-access-x7vtr\") pod \"ironic-operator-controller-manager-598f7747c9-twmcs\" (UID: \"77b3a8a4-addb-4f1c-95e5-8ad4b54ebf7d\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-twmcs" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.108712 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8sms\" (UniqueName: \"kubernetes.io/projected/2049f024-4549-43c1-b3ea-c42b38ade539-kube-api-access-t8sms\") pod \"manila-operator-controller-manager-78c6999f6f-qrfs7\" (UID: \"2049f024-4549-43c1-b3ea-c42b38ade539\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qrfs7" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.118581 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-s9l5b"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.119598 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-s9l5b" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.121513 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-7n76h" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.140798 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg2kz\" (UniqueName: \"kubernetes.io/projected/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-kube-api-access-dg2kz\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854689x6\" (UID: \"ee5300e4-6c64-4919-9ac3-1e8a9779abc3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.140881 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trkk2\" (UniqueName: \"kubernetes.io/projected/660281b0-2db3-4f96-a8c5-69c0ca0a5072-kube-api-access-trkk2\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-6kwks\" (UID: \"660281b0-2db3-4f96-a8c5-69c0ca0a5072\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6kwks" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.140922 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854689x6\" (UID: \"ee5300e4-6c64-4919-9ac3-1e8a9779abc3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.140946 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn545\" (UniqueName: \"kubernetes.io/projected/730865cc-5b68-4c45-927b-8a5fee90c539-kube-api-access-rn545\") pod \"neutron-operator-controller-manager-78d58447c5-c44sh\" (UID: \"730865cc-5b68-4c45-927b-8a5fee90c539\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c44sh" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.140998 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmwn\" (UniqueName: \"kubernetes.io/projected/c394fabc-a9e5-4e6b-81bb-511228e8c0fb-kube-api-access-pdmwn\") pod \"octavia-operator-controller-manager-5f4cd88d46-l44l6\" (UID: \"c394fabc-a9e5-4e6b-81bb-511228e8c0fb\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l44l6" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.141030 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4csbv\" (UniqueName: \"kubernetes.io/projected/65012584-29ae-4c06-9cd0-e30a86d7ceca-kube-api-access-4csbv\") pod \"nova-operator-controller-manager-55d49b7dd5-fp7ct\" (UID: \"65012584-29ae-4c06-9cd0-e30a86d7ceca\") " pod="openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.143237 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qrfs7" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.152067 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-s9l5b"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.167353 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-ll4sm"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.173802 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn545\" (UniqueName: \"kubernetes.io/projected/730865cc-5b68-4c45-927b-8a5fee90c539-kube-api-access-rn545\") pod \"neutron-operator-controller-manager-78d58447c5-c44sh\" (UID: \"730865cc-5b68-4c45-927b-8a5fee90c539\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c44sh" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.181346 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trkk2\" (UniqueName: \"kubernetes.io/projected/660281b0-2db3-4f96-a8c5-69c0ca0a5072-kube-api-access-trkk2\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-6kwks\" (UID: \"660281b0-2db3-4f96-a8c5-69c0ca0a5072\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6kwks" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.181403 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-92rdm"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.182276 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-92rdm" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.194542 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-wfdbv" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.238273 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-647dx"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.239114 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-647dx" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.244639 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9smf\" (UniqueName: \"kubernetes.io/projected/f16d3db5-4f22-4dc1-8cd2-9cf7c10fec26-kube-api-access-q9smf\") pod \"ovn-operator-controller-manager-6f75f45d54-ll4sm\" (UID: \"f16d3db5-4f22-4dc1-8cd2-9cf7c10fec26\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ll4sm" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.244701 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg2kz\" (UniqueName: \"kubernetes.io/projected/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-kube-api-access-dg2kz\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854689x6\" (UID: \"ee5300e4-6c64-4919-9ac3-1e8a9779abc3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.244731 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpwft\" (UniqueName: \"kubernetes.io/projected/a00c19fe-2da2-45ce-81b6-a32c17bbb1e7-kube-api-access-rpwft\") pod \"placement-operator-controller-manager-79d5ccc684-s9l5b\" (UID: \"a00c19fe-2da2-45ce-81b6-a32c17bbb1e7\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-s9l5b" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.244759 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854689x6\" (UID: \"ee5300e4-6c64-4919-9ac3-1e8a9779abc3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.244782 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdmwn\" (UniqueName: \"kubernetes.io/projected/c394fabc-a9e5-4e6b-81bb-511228e8c0fb-kube-api-access-pdmwn\") pod \"octavia-operator-controller-manager-5f4cd88d46-l44l6\" (UID: \"c394fabc-a9e5-4e6b-81bb-511228e8c0fb\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l44l6" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.244804 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4csbv\" (UniqueName: \"kubernetes.io/projected/65012584-29ae-4c06-9cd0-e30a86d7ceca-kube-api-access-4csbv\") pod \"nova-operator-controller-manager-55d49b7dd5-fp7ct\" (UID: \"65012584-29ae-4c06-9cd0-e30a86d7ceca\") " pod="openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct" Jan 28 17:32:24 crc kubenswrapper[5001]: E0128 17:32:24.245415 5001 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 17:32:24 crc kubenswrapper[5001]: E0128 17:32:24.245468 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert podName:ee5300e4-6c64-4919-9ac3-1e8a9779abc3 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:24.745451171 +0000 UTC m=+990.913239401 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854689x6" (UID: "ee5300e4-6c64-4919-9ac3-1e8a9779abc3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.253398 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-ngfns" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.259456 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-cv7jq" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.284241 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4csbv\" (UniqueName: \"kubernetes.io/projected/65012584-29ae-4c06-9cd0-e30a86d7ceca-kube-api-access-4csbv\") pod \"nova-operator-controller-manager-55d49b7dd5-fp7ct\" (UID: \"65012584-29ae-4c06-9cd0-e30a86d7ceca\") " pod="openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.293768 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg2kz\" (UniqueName: \"kubernetes.io/projected/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-kube-api-access-dg2kz\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854689x6\" (UID: \"ee5300e4-6c64-4919-9ac3-1e8a9779abc3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.301046 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-92rdm"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.311964 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdmwn\" (UniqueName: \"kubernetes.io/projected/c394fabc-a9e5-4e6b-81bb-511228e8c0fb-kube-api-access-pdmwn\") pod \"octavia-operator-controller-manager-5f4cd88d46-l44l6\" (UID: \"c394fabc-a9e5-4e6b-81bb-511228e8c0fb\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l44l6" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.320048 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-647dx"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.337487 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-djkj7"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.338346 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-djkj7" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.341486 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-wxg6l" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.350811 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-djkj7"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.351881 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-twmcs" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.352771 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkjc6\" (UniqueName: \"kubernetes.io/projected/1ea33ae1-a3ae-4f47-b28d-166e582f8b83-kube-api-access-rkjc6\") pod \"telemetry-operator-controller-manager-85cd9769bb-647dx\" (UID: \"1ea33ae1-a3ae-4f47-b28d-166e582f8b83\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-647dx" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.352837 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9smf\" (UniqueName: \"kubernetes.io/projected/f16d3db5-4f22-4dc1-8cd2-9cf7c10fec26-kube-api-access-q9smf\") pod \"ovn-operator-controller-manager-6f75f45d54-ll4sm\" (UID: \"f16d3db5-4f22-4dc1-8cd2-9cf7c10fec26\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ll4sm" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.352885 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpwft\" (UniqueName: \"kubernetes.io/projected/a00c19fe-2da2-45ce-81b6-a32c17bbb1e7-kube-api-access-rpwft\") pod \"placement-operator-controller-manager-79d5ccc684-s9l5b\" (UID: \"a00c19fe-2da2-45ce-81b6-a32c17bbb1e7\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-s9l5b" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.352911 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xspn9\" (UniqueName: \"kubernetes.io/projected/95fa542e-01b1-4cd6-878e-7afba27a9e5f-kube-api-access-xspn9\") pod \"swift-operator-controller-manager-547cbdb99f-92rdm\" (UID: \"95fa542e-01b1-4cd6-878e-7afba27a9e5f\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-92rdm" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.378904 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9smf\" (UniqueName: \"kubernetes.io/projected/f16d3db5-4f22-4dc1-8cd2-9cf7c10fec26-kube-api-access-q9smf\") pod \"ovn-operator-controller-manager-6f75f45d54-ll4sm\" (UID: \"f16d3db5-4f22-4dc1-8cd2-9cf7c10fec26\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ll4sm" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.386463 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpwft\" (UniqueName: \"kubernetes.io/projected/a00c19fe-2da2-45ce-81b6-a32c17bbb1e7-kube-api-access-rpwft\") pod \"placement-operator-controller-manager-79d5ccc684-s9l5b\" (UID: \"a00c19fe-2da2-45ce-81b6-a32c17bbb1e7\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-s9l5b" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.387059 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-p6lff" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.387359 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c44sh" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.406595 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-np74j"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.407876 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6kwks" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.411616 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-np74j" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.419480 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-jr5tl" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.423342 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.429019 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-np74j"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.443714 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l44l6" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.454336 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert\") pod \"infra-operator-controller-manager-694cf4f878-jm966\" (UID: \"42016b42-c753-4265-9902-c2969117ad64\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-jm966" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.454395 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fh7d\" (UniqueName: \"kubernetes.io/projected/9ab5c237-6fba-4123-bbdd-051d9519d4fa-kube-api-access-2fh7d\") pod \"test-operator-controller-manager-69797bbcbd-djkj7\" (UID: \"9ab5c237-6fba-4123-bbdd-051d9519d4fa\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-djkj7" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.454452 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xspn9\" (UniqueName: \"kubernetes.io/projected/95fa542e-01b1-4cd6-878e-7afba27a9e5f-kube-api-access-xspn9\") pod \"swift-operator-controller-manager-547cbdb99f-92rdm\" (UID: \"95fa542e-01b1-4cd6-878e-7afba27a9e5f\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-92rdm" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.454533 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkjc6\" (UniqueName: \"kubernetes.io/projected/1ea33ae1-a3ae-4f47-b28d-166e582f8b83-kube-api-access-rkjc6\") pod \"telemetry-operator-controller-manager-85cd9769bb-647dx\" (UID: \"1ea33ae1-a3ae-4f47-b28d-166e582f8b83\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-647dx" Jan 28 17:32:24 crc kubenswrapper[5001]: E0128 17:32:24.454937 5001 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Jan 28 17:32:24 crc kubenswrapper[5001]: E0128 17:32:24.455000 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert podName:42016b42-c753-4265-9902-c2969117ad64 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:25.454967121 +0000 UTC m=+991.622755351 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert") pod "infra-operator-controller-manager-694cf4f878-jm966" (UID: "42016b42-c753-4265-9902-c2969117ad64") : secret "infra-operator-webhook-server-cert" not found Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.502368 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkjc6\" (UniqueName: \"kubernetes.io/projected/1ea33ae1-a3ae-4f47-b28d-166e582f8b83-kube-api-access-rkjc6\") pod \"telemetry-operator-controller-manager-85cd9769bb-647dx\" (UID: \"1ea33ae1-a3ae-4f47-b28d-166e582f8b83\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-647dx" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.523489 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ll4sm" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.528329 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xspn9\" (UniqueName: \"kubernetes.io/projected/95fa542e-01b1-4cd6-878e-7afba27a9e5f-kube-api-access-xspn9\") pod \"swift-operator-controller-manager-547cbdb99f-92rdm\" (UID: \"95fa542e-01b1-4cd6-878e-7afba27a9e5f\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-92rdm" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.540247 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.541745 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.548717 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lsrts" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.548958 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.549109 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.555585 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-splhr\" (UniqueName: \"kubernetes.io/projected/38bc9590-ffbd-4924-90aa-c24a44a29bd7-kube-api-access-splhr\") pod \"watcher-operator-controller-manager-564965969-np74j\" (UID: \"38bc9590-ffbd-4924-90aa-c24a44a29bd7\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-np74j" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.555679 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fh7d\" (UniqueName: \"kubernetes.io/projected/9ab5c237-6fba-4123-bbdd-051d9519d4fa-kube-api-access-2fh7d\") pod \"test-operator-controller-manager-69797bbcbd-djkj7\" (UID: \"9ab5c237-6fba-4123-bbdd-051d9519d4fa\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-djkj7" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.561414 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-s9l5b" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.563162 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.610878 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fh7d\" (UniqueName: \"kubernetes.io/projected/9ab5c237-6fba-4123-bbdd-051d9519d4fa-kube-api-access-2fh7d\") pod \"test-operator-controller-manager-69797bbcbd-djkj7\" (UID: \"9ab5c237-6fba-4123-bbdd-051d9519d4fa\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-djkj7" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.620695 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-92rdm" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.624529 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dlw8k"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.625330 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dlw8k" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.630183 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-6gjv8" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.631928 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dlw8k"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.638887 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-647dx" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.657633 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpd8x\" (UniqueName: \"kubernetes.io/projected/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-kube-api-access-fpd8x\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.657767 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.657807 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-splhr\" (UniqueName: \"kubernetes.io/projected/38bc9590-ffbd-4924-90aa-c24a44a29bd7-kube-api-access-splhr\") pod \"watcher-operator-controller-manager-564965969-np74j\" (UID: \"38bc9590-ffbd-4924-90aa-c24a44a29bd7\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-np74j" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.657832 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.678708 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-splhr\" (UniqueName: \"kubernetes.io/projected/38bc9590-ffbd-4924-90aa-c24a44a29bd7-kube-api-access-splhr\") pod \"watcher-operator-controller-manager-564965969-np74j\" (UID: \"38bc9590-ffbd-4924-90aa-c24a44a29bd7\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-np74j" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.696458 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-djkj7" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.757494 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-np74j" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.758870 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46qg9\" (UniqueName: \"kubernetes.io/projected/95ef1fe7-914c-4c1e-9468-636a81ec6cce-kube-api-access-46qg9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dlw8k\" (UID: \"95ef1fe7-914c-4c1e-9468-636a81ec6cce\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dlw8k" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.758938 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpd8x\" (UniqueName: \"kubernetes.io/projected/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-kube-api-access-fpd8x\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.759029 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854689x6\" (UID: \"ee5300e4-6c64-4919-9ac3-1e8a9779abc3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.759075 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.759124 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:24 crc kubenswrapper[5001]: E0128 17:32:24.759235 5001 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 17:32:24 crc kubenswrapper[5001]: E0128 17:32:24.759288 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs podName:73f5ff01-c3cc-4fa1-b265-09a6716a24a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:25.259268597 +0000 UTC m=+991.427056827 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs") pod "openstack-operator-controller-manager-556755cfd4-d79zz" (UID: "73f5ff01-c3cc-4fa1-b265-09a6716a24a5") : secret "metrics-server-cert" not found Jan 28 17:32:24 crc kubenswrapper[5001]: E0128 17:32:24.759580 5001 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 17:32:24 crc kubenswrapper[5001]: E0128 17:32:24.759611 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert podName:ee5300e4-6c64-4919-9ac3-1e8a9779abc3 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:25.759602137 +0000 UTC m=+991.927390437 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854689x6" (UID: "ee5300e4-6c64-4919-9ac3-1e8a9779abc3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 17:32:24 crc kubenswrapper[5001]: E0128 17:32:24.759626 5001 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 17:32:24 crc kubenswrapper[5001]: E0128 17:32:24.759691 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs podName:73f5ff01-c3cc-4fa1-b265-09a6716a24a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:25.259665959 +0000 UTC m=+991.427454199 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs") pod "openstack-operator-controller-manager-556755cfd4-d79zz" (UID: "73f5ff01-c3cc-4fa1-b265-09a6716a24a5") : secret "webhook-server-cert" not found Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.784335 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpd8x\" (UniqueName: \"kubernetes.io/projected/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-kube-api-access-fpd8x\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.786256 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-jncbk"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.800421 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-pvh7q"] Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.862932 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46qg9\" (UniqueName: \"kubernetes.io/projected/95ef1fe7-914c-4c1e-9468-636a81ec6cce-kube-api-access-46qg9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dlw8k\" (UID: \"95ef1fe7-914c-4c1e-9468-636a81ec6cce\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dlw8k" Jan 28 17:32:24 crc kubenswrapper[5001]: W0128 17:32:24.871730 5001 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44247ccc_08d6_4c04_ae14_7595add07217.slice/crio-79aa202678f214e17c3208c33f31595bb478cf9668518bf27d22fee4f95325b6 WatchSource:0}: Error finding container 79aa202678f214e17c3208c33f31595bb478cf9668518bf27d22fee4f95325b6: Status 404 returned error can't find the container with id 79aa202678f214e17c3208c33f31595bb478cf9668518bf27d22fee4f95325b6 Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.907029 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46qg9\" (UniqueName: \"kubernetes.io/projected/95ef1fe7-914c-4c1e-9468-636a81ec6cce-kube-api-access-46qg9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dlw8k\" (UID: \"95ef1fe7-914c-4c1e-9468-636a81ec6cce\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dlw8k" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.951343 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dlw8k" Jan 28 17:32:24 crc kubenswrapper[5001]: I0128 17:32:24.953024 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-jvbp8"] Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.007501 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-cv7jq"] Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.031575 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-rpd8k"] Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.038898 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dt2m9"] Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.114145 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-pvh7q" event={"ID":"44247ccc-08d6-4c04-ae14-7595add07217","Type":"ContainerStarted","Data":"79aa202678f214e17c3208c33f31595bb478cf9668518bf27d22fee4f95325b6"} Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.118943 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-jvbp8" event={"ID":"842efc7d-25a3-4383-9ca0-a3d2e101990a","Type":"ContainerStarted","Data":"d4c400b92b6c494f762bbeb49c0f4014ffdcdd5ff0525dff96bd15fa2752aa40"} Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.124169 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-rpd8k" event={"ID":"f3751f38-96d6-42a8-98da-05cfbd294fb5","Type":"ContainerStarted","Data":"e026e2ba7ba5cb0b5dd6b25136abc73cc3f8f9de921b8d387180c24a1e695912"} Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.126387 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jncbk" event={"ID":"cff5af3e-db62-41be-b49c-8df7ea7a015a","Type":"ContainerStarted","Data":"c0fdb38199a7c57415240d4319363b47f17cfd8950fd32cbebabc60c9bdad88f"} Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.147285 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-qrfs7"] Jan 28 17:32:25 crc kubenswrapper[5001]: W0128 17:32:25.219993 5001 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2049f024_4549_43c1_b3ea_c42b38ade539.slice/crio-6ea72edcd5e72f601c8f7e40b13a715f24b151b71b9a9f1e35c5c6dfa3f3170c WatchSource:0}: Error finding container 6ea72edcd5e72f601c8f7e40b13a715f24b151b71b9a9f1e35c5c6dfa3f3170c: Status 404 returned error can't find the container with id 6ea72edcd5e72f601c8f7e40b13a715f24b151b71b9a9f1e35c5c6dfa3f3170c Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.277657 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.277737 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:25 crc kubenswrapper[5001]: E0128 17:32:25.277883 5001 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 17:32:25 crc kubenswrapper[5001]: E0128 17:32:25.278210 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs podName:73f5ff01-c3cc-4fa1-b265-09a6716a24a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:26.278178301 +0000 UTC m=+992.445966541 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs") pod "openstack-operator-controller-manager-556755cfd4-d79zz" (UID: "73f5ff01-c3cc-4fa1-b265-09a6716a24a5") : secret "webhook-server-cert" not found Jan 28 17:32:25 crc kubenswrapper[5001]: E0128 17:32:25.277890 5001 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 17:32:25 crc kubenswrapper[5001]: E0128 17:32:25.278618 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs podName:73f5ff01-c3cc-4fa1-b265-09a6716a24a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:26.278609154 +0000 UTC m=+992.446397384 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs") pod "openstack-operator-controller-manager-556755cfd4-d79zz" (UID: "73f5ff01-c3cc-4fa1-b265-09a6716a24a5") : secret "metrics-server-cert" not found Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.480495 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert\") pod \"infra-operator-controller-manager-694cf4f878-jm966\" (UID: \"42016b42-c753-4265-9902-c2969117ad64\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-jm966" Jan 28 17:32:25 crc kubenswrapper[5001]: E0128 17:32:25.480681 5001 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 17:32:25 crc kubenswrapper[5001]: E0128 17:32:25.480987 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert podName:42016b42-c753-4265-9902-c2969117ad64 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:27.480950746 +0000 UTC m=+993.648738976 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert") pod "infra-operator-controller-manager-694cf4f878-jm966" (UID: "42016b42-c753-4265-9902-c2969117ad64") : secret "infra-operator-webhook-server-cert" not found Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.537241 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-twmcs"] Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.556105 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-ll4sm"] Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.565276 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct"] Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.574332 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-c44sh"] Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.792594 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854689x6\" (UID: \"ee5300e4-6c64-4919-9ac3-1e8a9779abc3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6" Jan 28 17:32:25 crc kubenswrapper[5001]: E0128 17:32:25.792859 5001 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 17:32:25 crc kubenswrapper[5001]: E0128 17:32:25.792936 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert podName:ee5300e4-6c64-4919-9ac3-1e8a9779abc3 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:27.792918084 +0000 UTC m=+993.960706314 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854689x6" (UID: "ee5300e4-6c64-4919-9ac3-1e8a9779abc3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.871088 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6kwks"] Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.901474 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-djkj7"] Jan 28 17:32:25 crc kubenswrapper[5001]: W0128 17:32:25.921035 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d72532d_5aac_40f4_b308_4ac21a287e81.slice/crio-c30e9691a3463003a6cbdcdc2c0b0c117ca1269202c465be60f3895b6ef15663 WatchSource:0}: Error finding container c30e9691a3463003a6cbdcdc2c0b0c117ca1269202c465be60f3895b6ef15663: Status 404 returned error can't find the container with id c30e9691a3463003a6cbdcdc2c0b0c117ca1269202c465be60f3895b6ef15663 Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.921701 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-p6lff"] Jan 28 17:32:25 crc kubenswrapper[5001]: E0128 17:32:25.935084 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rpwft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-s9l5b_openstack-operators(a00c19fe-2da2-45ce-81b6-a32c17bbb1e7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 17:32:25 crc kubenswrapper[5001]: E0128 17:32:25.936202 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-s9l5b" podUID="a00c19fe-2da2-45ce-81b6-a32c17bbb1e7" Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.941837 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-647dx"] Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.954073 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-np74j"] Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.959726 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-92rdm"] Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.972096 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-s9l5b"] Jan 28 17:32:25 crc kubenswrapper[5001]: E0128 17:32:25.974627 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xspn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-92rdm_openstack-operators(95fa542e-01b1-4cd6-878e-7afba27a9e5f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 17:32:25 crc kubenswrapper[5001]: W0128 17:32:25.975405 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95ef1fe7_914c_4c1e_9468_636a81ec6cce.slice/crio-53e522cac129a472a4350cee160bab77aeb6f16b397b6ac38cb8401e70c12e44 WatchSource:0}: Error finding container 53e522cac129a472a4350cee160bab77aeb6f16b397b6ac38cb8401e70c12e44: Status 404 returned error can't find the container with id 53e522cac129a472a4350cee160bab77aeb6f16b397b6ac38cb8401e70c12e44 Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.975438 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bb6js" Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.975655 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bb6js" Jan 28 17:32:25 crc kubenswrapper[5001]: E0128 17:32:25.975963 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-92rdm" podUID="95fa542e-01b1-4cd6-878e-7afba27a9e5f" Jan 28 17:32:25 crc kubenswrapper[5001]: E0128 17:32:25.979395 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: 
{{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-46qg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-dlw8k_openstack-operators(95ef1fe7-914c-4c1e-9468-636a81ec6cce): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 17:32:25 crc kubenswrapper[5001]: E0128 17:32:25.981259 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dlw8k" podUID="95ef1fe7-914c-4c1e-9468-636a81ec6cce" Jan 28 17:32:25 crc kubenswrapper[5001]: W0128 17:32:25.985513 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod660281b0_2db3_4f96_a8c5_69c0ca0a5072.slice/crio-d9fc7b63e3f7df63ccc82f96303be376d7020c89f2b8fe435e1c62c69e6df431 WatchSource:0}: Error finding container d9fc7b63e3f7df63ccc82f96303be376d7020c89f2b8fe435e1c62c69e6df431: Status 404 returned error can't find the container with id d9fc7b63e3f7df63ccc82f96303be376d7020c89f2b8fe435e1c62c69e6df431 Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.986730 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dlw8k"] Jan 28 17:32:25 crc kubenswrapper[5001]: E0128 17:32:25.987575 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rkjc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-647dx_openstack-operators(1ea33ae1-a3ae-4f47-b28d-166e582f8b83): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 17:32:25 crc kubenswrapper[5001]: E0128 17:32:25.990066 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-647dx" podUID="1ea33ae1-a3ae-4f47-b28d-166e582f8b83" Jan 28 17:32:25 crc kubenswrapper[5001]: E0128 17:32:25.991760 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-trkk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-6kwks_openstack-operators(660281b0-2db3-4f96-a8c5-69c0ca0a5072): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 17:32:25 crc kubenswrapper[5001]: E0128 17:32:25.992862 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6kwks" podUID="660281b0-2db3-4f96-a8c5-69c0ca0a5072" Jan 28 17:32:25 crc kubenswrapper[5001]: I0128 17:32:25.993046 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l44l6"] Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.043673 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bb6js" Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.136142 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-cv7jq" event={"ID":"1f3f3b33-d586-448c-a967-fcd03c6fb11d","Type":"ContainerStarted","Data":"db57f8f27a477d2d2890e0274cdba7f86527b526c2adcef4a0eb4004545e479a"} Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.137425 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ll4sm" event={"ID":"f16d3db5-4f22-4dc1-8cd2-9cf7c10fec26","Type":"ContainerStarted","Data":"f3b612cedbd08d9cb928df38b4ab2cd4d70a714300cc115b4beabd40abff9c8c"} Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.140343 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-647dx" event={"ID":"1ea33ae1-a3ae-4f47-b28d-166e582f8b83","Type":"ContainerStarted","Data":"e888577384cc00db665f5904b1036f9f46cbe2341dae7f55a6fe5e740eed57a0"} Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.143125 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-s9l5b" event={"ID":"a00c19fe-2da2-45ce-81b6-a32c17bbb1e7","Type":"ContainerStarted","Data":"36c3977fad1d64e37d681af2616f09808e739c443529b44fa104677e5c4a7ad0"} Jan 28 17:32:26 crc kubenswrapper[5001]: E0128 17:32:26.143152 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-647dx" 
podUID="1ea33ae1-a3ae-4f47-b28d-166e582f8b83" Jan 28 17:32:26 crc kubenswrapper[5001]: E0128 17:32:26.148124 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-s9l5b" podUID="a00c19fe-2da2-45ce-81b6-a32c17bbb1e7" Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.151362 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6kwks" event={"ID":"660281b0-2db3-4f96-a8c5-69c0ca0a5072","Type":"ContainerStarted","Data":"d9fc7b63e3f7df63ccc82f96303be376d7020c89f2b8fe435e1c62c69e6df431"} Jan 28 17:32:26 crc kubenswrapper[5001]: E0128 17:32:26.156174 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6kwks" podUID="660281b0-2db3-4f96-a8c5-69c0ca0a5072" Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.157829 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l44l6" event={"ID":"c394fabc-a9e5-4e6b-81bb-511228e8c0fb","Type":"ContainerStarted","Data":"242d1906463fd3b6e5fa01d2508ed40859d4892a53930002a5cd6eaa8ef4b754"} Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.160461 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qrfs7" event={"ID":"2049f024-4549-43c1-b3ea-c42b38ade539","Type":"ContainerStarted","Data":"6ea72edcd5e72f601c8f7e40b13a715f24b151b71b9a9f1e35c5c6dfa3f3170c"} Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.163457 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-np74j" event={"ID":"38bc9590-ffbd-4924-90aa-c24a44a29bd7","Type":"ContainerStarted","Data":"3e076bca888c5980e90ff783f2ba5a186f31a1a3a1c77c0cc6466cfa94bea141"} Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.171225 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c44sh" event={"ID":"730865cc-5b68-4c45-927b-8a5fee90c539","Type":"ContainerStarted","Data":"3c7b260dc94b1d60c935a8ae60285f807572c97432964477985499ee9d98a8dd"} Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.173421 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-92rdm" event={"ID":"95fa542e-01b1-4cd6-878e-7afba27a9e5f","Type":"ContainerStarted","Data":"4c1e90462c8f3926bd5271e405a81cca3b587fd83a89e5b2eb1a7ebf6bcbded6"} Jan 28 17:32:26 crc kubenswrapper[5001]: E0128 17:32:26.174765 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-92rdm" 
podUID="95fa542e-01b1-4cd6-878e-7afba27a9e5f" Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.178122 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-djkj7" event={"ID":"9ab5c237-6fba-4123-bbdd-051d9519d4fa","Type":"ContainerStarted","Data":"232531c6b454457dc29d50c89d95aa762e940b7f27aad38b8cac545d986619dc"} Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.180330 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dt2m9" event={"ID":"0311571f-c23c-4554-8763-a3daced65fc8","Type":"ContainerStarted","Data":"7bc01db80c859de39001736b4e01843258305fd1cf0ced2a718dcff20b7a8cee"} Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.181524 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-p6lff" event={"ID":"0d72532d-5aac-40f4-b308-4ac21a287e81","Type":"ContainerStarted","Data":"c30e9691a3463003a6cbdcdc2c0b0c117ca1269202c465be60f3895b6ef15663"} Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.182698 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-twmcs" event={"ID":"77b3a8a4-addb-4f1c-95e5-8ad4b54ebf7d","Type":"ContainerStarted","Data":"ae040929479c03421c6c30ed3336e7f86464cefaf3f3d410f69b8cd820ed0b1d"} Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.186373 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dlw8k" event={"ID":"95ef1fe7-914c-4c1e-9468-636a81ec6cce","Type":"ContainerStarted","Data":"53e522cac129a472a4350cee160bab77aeb6f16b397b6ac38cb8401e70c12e44"} Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.187266 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct" event={"ID":"65012584-29ae-4c06-9cd0-e30a86d7ceca","Type":"ContainerStarted","Data":"531088ac753040b27b9fb8f5a507783c6b42370446fbd664450c22ff69ecd78e"} Jan 28 17:32:26 crc kubenswrapper[5001]: E0128 17:32:26.188890 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dlw8k" podUID="95ef1fe7-914c-4c1e-9468-636a81ec6cce" Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.273442 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bb6js" Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.310465 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.310534 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs\") pod 
\"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:26 crc kubenswrapper[5001]: E0128 17:32:26.310726 5001 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 17:32:26 crc kubenswrapper[5001]: E0128 17:32:26.310797 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs podName:73f5ff01-c3cc-4fa1-b265-09a6716a24a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:28.310776837 +0000 UTC m=+994.478565067 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs") pod "openstack-operator-controller-manager-556755cfd4-d79zz" (UID: "73f5ff01-c3cc-4fa1-b265-09a6716a24a5") : secret "metrics-server-cert" not found Jan 28 17:32:26 crc kubenswrapper[5001]: E0128 17:32:26.311213 5001 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 17:32:26 crc kubenswrapper[5001]: E0128 17:32:26.311275 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs podName:73f5ff01-c3cc-4fa1-b265-09a6716a24a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:28.311262121 +0000 UTC m=+994.479050351 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs") pod "openstack-operator-controller-manager-556755cfd4-d79zz" (UID: "73f5ff01-c3cc-4fa1-b265-09a6716a24a5") : secret "webhook-server-cert" not found Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.355118 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bb6js"] Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.393294 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ndsd8"] Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.393731 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ndsd8" podUID="01a6f242-b936-4752-b868-ebffda3b8657" containerName="registry-server" containerID="cri-o://fce7bf2f1d31f051ffc72ac0cfe74907bcaff16f4b778b469b5d753610fe9000" gracePeriod=2 Jan 28 17:32:26 crc kubenswrapper[5001]: I0128 17:32:26.905774 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ndsd8" Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.046423 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01a6f242-b936-4752-b868-ebffda3b8657-catalog-content\") pod \"01a6f242-b936-4752-b868-ebffda3b8657\" (UID: \"01a6f242-b936-4752-b868-ebffda3b8657\") " Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.046485 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp4hb\" (UniqueName: \"kubernetes.io/projected/01a6f242-b936-4752-b868-ebffda3b8657-kube-api-access-mp4hb\") pod \"01a6f242-b936-4752-b868-ebffda3b8657\" (UID: \"01a6f242-b936-4752-b868-ebffda3b8657\") " Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.046613 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01a6f242-b936-4752-b868-ebffda3b8657-utilities\") pod \"01a6f242-b936-4752-b868-ebffda3b8657\" (UID: \"01a6f242-b936-4752-b868-ebffda3b8657\") " Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.048475 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01a6f242-b936-4752-b868-ebffda3b8657-utilities" (OuterVolumeSpecName: "utilities") pod "01a6f242-b936-4752-b868-ebffda3b8657" (UID: "01a6f242-b936-4752-b868-ebffda3b8657"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.072487 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a6f242-b936-4752-b868-ebffda3b8657-kube-api-access-mp4hb" (OuterVolumeSpecName: "kube-api-access-mp4hb") pod "01a6f242-b936-4752-b868-ebffda3b8657" (UID: "01a6f242-b936-4752-b868-ebffda3b8657"). InnerVolumeSpecName "kube-api-access-mp4hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.127384 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01a6f242-b936-4752-b868-ebffda3b8657-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01a6f242-b936-4752-b868-ebffda3b8657" (UID: "01a6f242-b936-4752-b868-ebffda3b8657"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.151397 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01a6f242-b936-4752-b868-ebffda3b8657-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.151428 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01a6f242-b936-4752-b868-ebffda3b8657-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.151438 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp4hb\" (UniqueName: \"kubernetes.io/projected/01a6f242-b936-4752-b868-ebffda3b8657-kube-api-access-mp4hb\") on node \"crc\" DevicePath \"\"" Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.204069 5001 generic.go:334] "Generic (PLEG): container finished" podID="01a6f242-b936-4752-b868-ebffda3b8657" containerID="fce7bf2f1d31f051ffc72ac0cfe74907bcaff16f4b778b469b5d753610fe9000" exitCode=0 Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.204133 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ndsd8" Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.204189 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndsd8" event={"ID":"01a6f242-b936-4752-b868-ebffda3b8657","Type":"ContainerDied","Data":"fce7bf2f1d31f051ffc72ac0cfe74907bcaff16f4b778b469b5d753610fe9000"} Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.204239 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndsd8" event={"ID":"01a6f242-b936-4752-b868-ebffda3b8657","Type":"ContainerDied","Data":"5cdefeae01218ad0229f1a5ee32a5f22cdc39f088b4d53379802685487a10cf7"} Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.204259 5001 scope.go:117] "RemoveContainer" containerID="fce7bf2f1d31f051ffc72ac0cfe74907bcaff16f4b778b469b5d753610fe9000" Jan 28 17:32:27 crc kubenswrapper[5001]: E0128 17:32:27.211253 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-s9l5b" podUID="a00c19fe-2da2-45ce-81b6-a32c17bbb1e7" Jan 28 17:32:27 crc kubenswrapper[5001]: E0128 17:32:27.211270 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-647dx" podUID="1ea33ae1-a3ae-4f47-b28d-166e582f8b83" Jan 28 17:32:27 crc kubenswrapper[5001]: E0128 17:32:27.211258 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dlw8k" 
podUID="95ef1fe7-914c-4c1e-9468-636a81ec6cce" Jan 28 17:32:27 crc kubenswrapper[5001]: E0128 17:32:27.211258 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6kwks" podUID="660281b0-2db3-4f96-a8c5-69c0ca0a5072" Jan 28 17:32:27 crc kubenswrapper[5001]: E0128 17:32:27.211589 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-92rdm" podUID="95fa542e-01b1-4cd6-878e-7afba27a9e5f" Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.237518 5001 scope.go:117] "RemoveContainer" containerID="8bc1818c21a05c19ac7e6e525c161aa7e7844fc34a12c38dd248d2fad596a073" Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.327262 5001 scope.go:117] "RemoveContainer" containerID="6cac40fe6d97c6a5b0f0e430996cf2668cb3aa2a08ab68e2e124442088434952" Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.389492 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ndsd8"] Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.429366 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ndsd8"] Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.460191 5001 scope.go:117] "RemoveContainer" containerID="fce7bf2f1d31f051ffc72ac0cfe74907bcaff16f4b778b469b5d753610fe9000" Jan 28 17:32:27 crc kubenswrapper[5001]: E0128 17:32:27.470146 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fce7bf2f1d31f051ffc72ac0cfe74907bcaff16f4b778b469b5d753610fe9000\": container with ID starting with fce7bf2f1d31f051ffc72ac0cfe74907bcaff16f4b778b469b5d753610fe9000 not found: ID does not exist" containerID="fce7bf2f1d31f051ffc72ac0cfe74907bcaff16f4b778b469b5d753610fe9000" Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.470207 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fce7bf2f1d31f051ffc72ac0cfe74907bcaff16f4b778b469b5d753610fe9000"} err="failed to get container status \"fce7bf2f1d31f051ffc72ac0cfe74907bcaff16f4b778b469b5d753610fe9000\": rpc error: code = NotFound desc = could not find container \"fce7bf2f1d31f051ffc72ac0cfe74907bcaff16f4b778b469b5d753610fe9000\": container with ID starting with fce7bf2f1d31f051ffc72ac0cfe74907bcaff16f4b778b469b5d753610fe9000 not found: ID does not exist" Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.470237 5001 scope.go:117] "RemoveContainer" containerID="8bc1818c21a05c19ac7e6e525c161aa7e7844fc34a12c38dd248d2fad596a073" Jan 28 17:32:27 crc kubenswrapper[5001]: E0128 17:32:27.474097 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bc1818c21a05c19ac7e6e525c161aa7e7844fc34a12c38dd248d2fad596a073\": container with ID starting with 8bc1818c21a05c19ac7e6e525c161aa7e7844fc34a12c38dd248d2fad596a073 not found: ID does not exist" 
containerID="8bc1818c21a05c19ac7e6e525c161aa7e7844fc34a12c38dd248d2fad596a073" Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.474133 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bc1818c21a05c19ac7e6e525c161aa7e7844fc34a12c38dd248d2fad596a073"} err="failed to get container status \"8bc1818c21a05c19ac7e6e525c161aa7e7844fc34a12c38dd248d2fad596a073\": rpc error: code = NotFound desc = could not find container \"8bc1818c21a05c19ac7e6e525c161aa7e7844fc34a12c38dd248d2fad596a073\": container with ID starting with 8bc1818c21a05c19ac7e6e525c161aa7e7844fc34a12c38dd248d2fad596a073 not found: ID does not exist" Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.474153 5001 scope.go:117] "RemoveContainer" containerID="6cac40fe6d97c6a5b0f0e430996cf2668cb3aa2a08ab68e2e124442088434952" Jan 28 17:32:27 crc kubenswrapper[5001]: E0128 17:32:27.484376 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cac40fe6d97c6a5b0f0e430996cf2668cb3aa2a08ab68e2e124442088434952\": container with ID starting with 6cac40fe6d97c6a5b0f0e430996cf2668cb3aa2a08ab68e2e124442088434952 not found: ID does not exist" containerID="6cac40fe6d97c6a5b0f0e430996cf2668cb3aa2a08ab68e2e124442088434952" Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.484424 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cac40fe6d97c6a5b0f0e430996cf2668cb3aa2a08ab68e2e124442088434952"} err="failed to get container status \"6cac40fe6d97c6a5b0f0e430996cf2668cb3aa2a08ab68e2e124442088434952\": rpc error: code = NotFound desc = could not find container \"6cac40fe6d97c6a5b0f0e430996cf2668cb3aa2a08ab68e2e124442088434952\": container with ID starting with 6cac40fe6d97c6a5b0f0e430996cf2668cb3aa2a08ab68e2e124442088434952 not found: ID does not exist" Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.564702 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert\") pod \"infra-operator-controller-manager-694cf4f878-jm966\" (UID: \"42016b42-c753-4265-9902-c2969117ad64\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-jm966" Jan 28 17:32:27 crc kubenswrapper[5001]: E0128 17:32:27.564892 5001 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 17:32:27 crc kubenswrapper[5001]: E0128 17:32:27.564949 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert podName:42016b42-c753-4265-9902-c2969117ad64 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:31.564930712 +0000 UTC m=+997.732718942 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert") pod "infra-operator-controller-manager-694cf4f878-jm966" (UID: "42016b42-c753-4265-9902-c2969117ad64") : secret "infra-operator-webhook-server-cert" not found Jan 28 17:32:27 crc kubenswrapper[5001]: I0128 17:32:27.869842 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854689x6\" (UID: \"ee5300e4-6c64-4919-9ac3-1e8a9779abc3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6" Jan 28 17:32:27 crc kubenswrapper[5001]: E0128 17:32:27.870139 5001 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 17:32:27 crc kubenswrapper[5001]: E0128 17:32:27.870507 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert podName:ee5300e4-6c64-4919-9ac3-1e8a9779abc3 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:31.870486915 +0000 UTC m=+998.038275145 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854689x6" (UID: "ee5300e4-6c64-4919-9ac3-1e8a9779abc3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 17:32:28 crc kubenswrapper[5001]: I0128 17:32:28.376533 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:28 crc kubenswrapper[5001]: I0128 17:32:28.376627 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:28 crc kubenswrapper[5001]: E0128 17:32:28.376709 5001 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 17:32:28 crc kubenswrapper[5001]: E0128 17:32:28.376783 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs podName:73f5ff01-c3cc-4fa1-b265-09a6716a24a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:32.376760712 +0000 UTC m=+998.544548932 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs") pod "openstack-operator-controller-manager-556755cfd4-d79zz" (UID: "73f5ff01-c3cc-4fa1-b265-09a6716a24a5") : secret "webhook-server-cert" not found Jan 28 17:32:28 crc kubenswrapper[5001]: E0128 17:32:28.376858 5001 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 17:32:28 crc kubenswrapper[5001]: E0128 17:32:28.376956 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs podName:73f5ff01-c3cc-4fa1-b265-09a6716a24a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:32.376929497 +0000 UTC m=+998.544717767 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs") pod "openstack-operator-controller-manager-556755cfd4-d79zz" (UID: "73f5ff01-c3cc-4fa1-b265-09a6716a24a5") : secret "metrics-server-cert" not found Jan 28 17:32:28 crc kubenswrapper[5001]: I0128 17:32:28.607699 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01a6f242-b936-4752-b868-ebffda3b8657" path="/var/lib/kubelet/pods/01a6f242-b936-4752-b868-ebffda3b8657/volumes" Jan 28 17:32:31 crc kubenswrapper[5001]: I0128 17:32:31.630373 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert\") pod \"infra-operator-controller-manager-694cf4f878-jm966\" (UID: \"42016b42-c753-4265-9902-c2969117ad64\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-jm966" Jan 28 17:32:31 crc kubenswrapper[5001]: E0128 17:32:31.630896 5001 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 17:32:31 crc kubenswrapper[5001]: E0128 17:32:31.630950 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert podName:42016b42-c753-4265-9902-c2969117ad64 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:39.630932092 +0000 UTC m=+1005.798720322 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert") pod "infra-operator-controller-manager-694cf4f878-jm966" (UID: "42016b42-c753-4265-9902-c2969117ad64") : secret "infra-operator-webhook-server-cert" not found Jan 28 17:32:31 crc kubenswrapper[5001]: I0128 17:32:31.935011 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854689x6\" (UID: \"ee5300e4-6c64-4919-9ac3-1e8a9779abc3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6" Jan 28 17:32:31 crc kubenswrapper[5001]: E0128 17:32:31.935224 5001 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 17:32:31 crc kubenswrapper[5001]: E0128 17:32:31.935283 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert podName:ee5300e4-6c64-4919-9ac3-1e8a9779abc3 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:39.935264659 +0000 UTC m=+1006.103052889 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854689x6" (UID: "ee5300e4-6c64-4919-9ac3-1e8a9779abc3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 17:32:32 crc kubenswrapper[5001]: I0128 17:32:32.441327 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:32 crc kubenswrapper[5001]: I0128 17:32:32.441483 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:32 crc kubenswrapper[5001]: E0128 17:32:32.441534 5001 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 17:32:32 crc kubenswrapper[5001]: E0128 17:32:32.441628 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs podName:73f5ff01-c3cc-4fa1-b265-09a6716a24a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:40.441604918 +0000 UTC m=+1006.609393208 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs") pod "openstack-operator-controller-manager-556755cfd4-d79zz" (UID: "73f5ff01-c3cc-4fa1-b265-09a6716a24a5") : secret "webhook-server-cert" not found Jan 28 17:32:32 crc kubenswrapper[5001]: E0128 17:32:32.441662 5001 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 17:32:32 crc kubenswrapper[5001]: E0128 17:32:32.441745 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs podName:73f5ff01-c3cc-4fa1-b265-09a6716a24a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:40.441726932 +0000 UTC m=+1006.609515162 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs") pod "openstack-operator-controller-manager-556755cfd4-d79zz" (UID: "73f5ff01-c3cc-4fa1-b265-09a6716a24a5") : secret "metrics-server-cert" not found Jan 28 17:32:34 crc kubenswrapper[5001]: I0128 17:32:34.834173 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:32:34 crc kubenswrapper[5001]: I0128 17:32:34.835511 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:32:38 crc kubenswrapper[5001]: E0128 17:32:38.495833 5001 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 28 17:32:38 crc kubenswrapper[5001]: E0128 17:32:38.496360 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rn545,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-c44sh_openstack-operators(730865cc-5b68-4c45-927b-8a5fee90c539): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 17:32:38 crc kubenswrapper[5001]: E0128 17:32:38.497535 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c44sh" podUID="730865cc-5b68-4c45-927b-8a5fee90c539" Jan 28 17:32:39 crc kubenswrapper[5001]: E0128 17:32:39.303236 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c44sh" podUID="730865cc-5b68-4c45-927b-8a5fee90c539" Jan 28 17:32:39 crc kubenswrapper[5001]: I0128 17:32:39.686913 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert\") pod \"infra-operator-controller-manager-694cf4f878-jm966\" (UID: \"42016b42-c753-4265-9902-c2969117ad64\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-jm966" Jan 28 17:32:39 crc kubenswrapper[5001]: E0128 17:32:39.687905 5001 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 17:32:39 crc kubenswrapper[5001]: E0128 17:32:39.687968 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert podName:42016b42-c753-4265-9902-c2969117ad64 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:55.687951448 +0000 UTC m=+1021.855739688 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert") pod "infra-operator-controller-manager-694cf4f878-jm966" (UID: "42016b42-c753-4265-9902-c2969117ad64") : secret "infra-operator-webhook-server-cert" not found Jan 28 17:32:39 crc kubenswrapper[5001]: I0128 17:32:39.990513 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854689x6\" (UID: \"ee5300e4-6c64-4919-9ac3-1e8a9779abc3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6" Jan 28 17:32:39 crc kubenswrapper[5001]: E0128 17:32:39.990635 5001 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 17:32:39 crc kubenswrapper[5001]: E0128 17:32:39.990998 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert podName:ee5300e4-6c64-4919-9ac3-1e8a9779abc3 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:55.990964887 +0000 UTC m=+1022.158753117 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854689x6" (UID: "ee5300e4-6c64-4919-9ac3-1e8a9779abc3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 17:32:40 crc kubenswrapper[5001]: E0128 17:32:40.227551 5001 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 28 17:32:40 crc kubenswrapper[5001]: E0128 17:32:40.227753 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9lhgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-p6lff_openstack-operators(0d72532d-5aac-40f4-b308-4ac21a287e81): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 17:32:40 crc kubenswrapper[5001]: E0128 17:32:40.229481 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-p6lff" podUID="0d72532d-5aac-40f4-b308-4ac21a287e81" Jan 28 17:32:40 crc kubenswrapper[5001]: E0128 17:32:40.310036 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-p6lff" podUID="0d72532d-5aac-40f4-b308-4ac21a287e81" Jan 28 17:32:40 crc kubenswrapper[5001]: I0128 17:32:40.497193 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:40 crc kubenswrapper[5001]: I0128 17:32:40.497275 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:40 crc kubenswrapper[5001]: E0128 17:32:40.497477 5001 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 17:32:40 crc kubenswrapper[5001]: E0128 17:32:40.497531 5001 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 17:32:40 crc kubenswrapper[5001]: E0128 17:32:40.497539 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs podName:73f5ff01-c3cc-4fa1-b265-09a6716a24a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:56.497522662 +0000 UTC m=+1022.665310892 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs") pod "openstack-operator-controller-manager-556755cfd4-d79zz" (UID: "73f5ff01-c3cc-4fa1-b265-09a6716a24a5") : secret "webhook-server-cert" not found Jan 28 17:32:40 crc kubenswrapper[5001]: E0128 17:32:40.497597 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs podName:73f5ff01-c3cc-4fa1-b265-09a6716a24a5 nodeName:}" failed. No retries permitted until 2026-01-28 17:32:56.497577873 +0000 UTC m=+1022.665366163 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs") pod "openstack-operator-controller-manager-556755cfd4-d79zz" (UID: "73f5ff01-c3cc-4fa1-b265-09a6716a24a5") : secret "metrics-server-cert" not found Jan 28 17:32:40 crc kubenswrapper[5001]: E0128 17:32:40.853908 5001 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.46:5001/openstack-k8s-operators/nova-operator:ae086f7a668267f6ecd5ffb5768920bf81c4caba" Jan 28 17:32:40 crc kubenswrapper[5001]: E0128 17:32:40.853944 5001 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.46:5001/openstack-k8s-operators/nova-operator:ae086f7a668267f6ecd5ffb5768920bf81c4caba" Jan 28 17:32:40 crc kubenswrapper[5001]: E0128 17:32:40.854462 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.46:5001/openstack-k8s-operators/nova-operator:ae086f7a668267f6ecd5ffb5768920bf81c4caba,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4csbv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55d49b7dd5-fp7ct_openstack-operators(65012584-29ae-4c06-9cd0-e30a86d7ceca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 17:32:40 crc kubenswrapper[5001]: E0128 17:32:40.855697 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct" podUID="65012584-29ae-4c06-9cd0-e30a86d7ceca" Jan 28 17:32:41 crc kubenswrapper[5001]: I0128 17:32:41.342229 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-twmcs" event={"ID":"77b3a8a4-addb-4f1c-95e5-8ad4b54ebf7d","Type":"ContainerStarted","Data":"39f5927444de5e1b16525102fc7b49363b6c7ef23a262938910132ba70483101"} Jan 28 17:32:41 crc kubenswrapper[5001]: I0128 17:32:41.343988 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-twmcs" Jan 28 17:32:41 crc kubenswrapper[5001]: I0128 17:32:41.354111 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dt2m9" event={"ID":"0311571f-c23c-4554-8763-a3daced65fc8","Type":"ContainerStarted","Data":"bfda9df3abe9bc85bae0e25eb564374102068b201d0db58b18e0c956ff8c23c9"} Jan 28 17:32:41 crc kubenswrapper[5001]: I0128 17:32:41.354857 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dt2m9" Jan 28 17:32:41 crc kubenswrapper[5001]: I0128 17:32:41.362821 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-twmcs" podStartSLOduration=3.081894197 podStartE2EDuration="18.362804541s" podCreationTimestamp="2026-01-28 17:32:23 +0000 UTC" firstStartedPulling="2026-01-28 17:32:25.543761246 +0000 UTC m=+991.711549476" lastFinishedPulling="2026-01-28 17:32:40.82467159 +0000 UTC m=+1006.992459820" observedRunningTime="2026-01-28 17:32:41.358240928 +0000 UTC m=+1007.526029188" watchObservedRunningTime="2026-01-28 17:32:41.362804541 +0000 UTC m=+1007.530592771" Jan 28 17:32:41 crc kubenswrapper[5001]: I0128 17:32:41.363568 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-np74j" event={"ID":"38bc9590-ffbd-4924-90aa-c24a44a29bd7","Type":"ContainerStarted","Data":"3a9bd5e26fb4eb368a0e366769a2526f0ee39306eff697cb4f1618fbe9ab6c0f"} Jan 28 17:32:41 crc 
kubenswrapper[5001]: I0128 17:32:41.364673 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-np74j" Jan 28 17:32:41 crc kubenswrapper[5001]: I0128 17:32:41.367682 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-pvh7q" event={"ID":"44247ccc-08d6-4c04-ae14-7595add07217","Type":"ContainerStarted","Data":"50d14c3a8a4b0fff55c55ffbabef1e31f455b0880686a9acd8011264fa8e7fe4"} Jan 28 17:32:41 crc kubenswrapper[5001]: I0128 17:32:41.367724 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-pvh7q" Jan 28 17:32:41 crc kubenswrapper[5001]: E0128 17:32:41.368323 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.46:5001/openstack-k8s-operators/nova-operator:ae086f7a668267f6ecd5ffb5768920bf81c4caba\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct" podUID="65012584-29ae-4c06-9cd0-e30a86d7ceca" Jan 28 17:32:41 crc kubenswrapper[5001]: I0128 17:32:41.398338 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dt2m9" podStartSLOduration=2.789258788 podStartE2EDuration="18.398313929s" podCreationTimestamp="2026-01-28 17:32:23 +0000 UTC" firstStartedPulling="2026-01-28 17:32:25.219432609 +0000 UTC m=+991.387220839" lastFinishedPulling="2026-01-28 17:32:40.82848775 +0000 UTC m=+1006.996275980" observedRunningTime="2026-01-28 17:32:41.390421781 +0000 UTC m=+1007.558210011" watchObservedRunningTime="2026-01-28 17:32:41.398313929 +0000 UTC m=+1007.566102159" Jan 28 17:32:41 crc kubenswrapper[5001]: I0128 17:32:41.420444 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-pvh7q" podStartSLOduration=2.58465123 podStartE2EDuration="18.42042423s" podCreationTimestamp="2026-01-28 17:32:23 +0000 UTC" firstStartedPulling="2026-01-28 17:32:24.908401378 +0000 UTC m=+991.076189608" lastFinishedPulling="2026-01-28 17:32:40.744174388 +0000 UTC m=+1006.911962608" observedRunningTime="2026-01-28 17:32:41.412120269 +0000 UTC m=+1007.579908499" watchObservedRunningTime="2026-01-28 17:32:41.42042423 +0000 UTC m=+1007.588212460" Jan 28 17:32:41 crc kubenswrapper[5001]: I0128 17:32:41.449507 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-np74j" podStartSLOduration=2.551361668 podStartE2EDuration="17.449490752s" podCreationTimestamp="2026-01-28 17:32:24 +0000 UTC" firstStartedPulling="2026-01-28 17:32:25.92909864 +0000 UTC m=+992.096886870" lastFinishedPulling="2026-01-28 17:32:40.827227724 +0000 UTC m=+1006.995015954" observedRunningTime="2026-01-28 17:32:41.448685149 +0000 UTC m=+1007.616473389" watchObservedRunningTime="2026-01-28 17:32:41.449490752 +0000 UTC m=+1007.617278982" Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.386005 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-cv7jq" event={"ID":"1f3f3b33-d586-448c-a967-fcd03c6fb11d","Type":"ContainerStarted","Data":"22f746bf49f829f7c0cf610513052957061529c399917ec6c1821915c9e98520"} Jan 28 17:32:42 crc 
kubenswrapper[5001]: I0128 17:32:42.386075 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-cv7jq" Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.387931 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ll4sm" event={"ID":"f16d3db5-4f22-4dc1-8cd2-9cf7c10fec26","Type":"ContainerStarted","Data":"36d3ca5e4cf8d7ca597d16a2ba785f4df80b38c2abaa074906fffa92391e4326"} Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.388352 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ll4sm" Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.397125 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-jvbp8" event={"ID":"842efc7d-25a3-4383-9ca0-a3d2e101990a","Type":"ContainerStarted","Data":"491af4a4a820d592635519a3671b4f0d7c64783addd7cbc9c490abd58b77deab"} Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.397937 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-jvbp8" Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.401738 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-rpd8k" event={"ID":"f3751f38-96d6-42a8-98da-05cfbd294fb5","Type":"ContainerStarted","Data":"f86e6bca822c36c795d2baade1edc3d22fd23fbc0986b9d6ea7b6929b42a460a"} Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.402651 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-rpd8k" Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.407793 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-cv7jq" podStartSLOduration=3.699389697 podStartE2EDuration="19.407778446s" podCreationTimestamp="2026-01-28 17:32:23 +0000 UTC" firstStartedPulling="2026-01-28 17:32:25.113543631 +0000 UTC m=+991.281331861" lastFinishedPulling="2026-01-28 17:32:40.82193238 +0000 UTC m=+1006.989720610" observedRunningTime="2026-01-28 17:32:42.405104508 +0000 UTC m=+1008.572892738" watchObservedRunningTime="2026-01-28 17:32:42.407778446 +0000 UTC m=+1008.575566666" Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.413605 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l44l6" event={"ID":"c394fabc-a9e5-4e6b-81bb-511228e8c0fb","Type":"ContainerStarted","Data":"bdc46e4901912ad025ca59b65eef784c1a0da7ed2fab48fcda5a288b9613b213"} Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.413690 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l44l6" Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.424405 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qrfs7" event={"ID":"2049f024-4549-43c1-b3ea-c42b38ade539","Type":"ContainerStarted","Data":"66d50082deec4d567f4b83cdfbc9ea4578b74e2837c7dc1471ff973bee4a652d"} Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.424613 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qrfs7" Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.427626 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jncbk" event={"ID":"cff5af3e-db62-41be-b49c-8df7ea7a015a","Type":"ContainerStarted","Data":"ce8da3676670b8b48b5d1e53279c96cf6fd8e5412576fc5745621348c3ccd56a"} Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.427821 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jncbk" Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.432551 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-jvbp8" podStartSLOduration=3.659426129 podStartE2EDuration="19.432532023s" podCreationTimestamp="2026-01-28 17:32:23 +0000 UTC" firstStartedPulling="2026-01-28 17:32:25.050777883 +0000 UTC m=+991.218566113" lastFinishedPulling="2026-01-28 17:32:40.823883777 +0000 UTC m=+1006.991672007" observedRunningTime="2026-01-28 17:32:42.427360683 +0000 UTC m=+1008.595148913" watchObservedRunningTime="2026-01-28 17:32:42.432532023 +0000 UTC m=+1008.600320243" Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.435946 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-djkj7" event={"ID":"9ab5c237-6fba-4123-bbdd-051d9519d4fa","Type":"ContainerStarted","Data":"61cb81d42c8964033a57c11d10cf7904189390e1bbef90a81f249f72655a39cb"} Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.436136 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-djkj7" Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.454426 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-rpd8k" podStartSLOduration=3.673337583 podStartE2EDuration="19.454408997s" podCreationTimestamp="2026-01-28 17:32:23 +0000 UTC" firstStartedPulling="2026-01-28 17:32:25.047835468 +0000 UTC m=+991.215623698" lastFinishedPulling="2026-01-28 17:32:40.828906882 +0000 UTC m=+1006.996695112" observedRunningTime="2026-01-28 17:32:42.452427129 +0000 UTC m=+1008.620215379" watchObservedRunningTime="2026-01-28 17:32:42.454408997 +0000 UTC m=+1008.622197227" Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.475854 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ll4sm" podStartSLOduration=4.209761973 podStartE2EDuration="19.475829997s" podCreationTimestamp="2026-01-28 17:32:23 +0000 UTC" firstStartedPulling="2026-01-28 17:32:25.558556844 +0000 UTC m=+991.726345074" lastFinishedPulling="2026-01-28 17:32:40.824624868 +0000 UTC m=+1006.992413098" observedRunningTime="2026-01-28 17:32:42.475693463 +0000 UTC m=+1008.643481693" watchObservedRunningTime="2026-01-28 17:32:42.475829997 +0000 UTC m=+1008.643618227" Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.502473 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jncbk" podStartSLOduration=3.5486496499999998 podStartE2EDuration="19.502456819s" podCreationTimestamp="2026-01-28 17:32:23 +0000 UTC" firstStartedPulling="2026-01-28 
17:32:24.868587355 +0000 UTC m=+991.036375585" lastFinishedPulling="2026-01-28 17:32:40.822394524 +0000 UTC m=+1006.990182754" observedRunningTime="2026-01-28 17:32:42.502123639 +0000 UTC m=+1008.669911869" watchObservedRunningTime="2026-01-28 17:32:42.502456819 +0000 UTC m=+1008.670245049" Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.529519 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qrfs7" podStartSLOduration=3.925025344 podStartE2EDuration="19.529495942s" podCreationTimestamp="2026-01-28 17:32:23 +0000 UTC" firstStartedPulling="2026-01-28 17:32:25.223835527 +0000 UTC m=+991.391623757" lastFinishedPulling="2026-01-28 17:32:40.828306125 +0000 UTC m=+1006.996094355" observedRunningTime="2026-01-28 17:32:42.523485988 +0000 UTC m=+1008.691274218" watchObservedRunningTime="2026-01-28 17:32:42.529495942 +0000 UTC m=+1008.697284172" Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.543446 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-djkj7" podStartSLOduration=3.64559471 podStartE2EDuration="18.543432266s" podCreationTimestamp="2026-01-28 17:32:24 +0000 UTC" firstStartedPulling="2026-01-28 17:32:25.923392394 +0000 UTC m=+992.091180624" lastFinishedPulling="2026-01-28 17:32:40.82122996 +0000 UTC m=+1006.989018180" observedRunningTime="2026-01-28 17:32:42.54186579 +0000 UTC m=+1008.709654020" watchObservedRunningTime="2026-01-28 17:32:42.543432266 +0000 UTC m=+1008.711220496" Jan 28 17:32:42 crc kubenswrapper[5001]: I0128 17:32:42.558414 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l44l6" podStartSLOduration=4.665807405 podStartE2EDuration="19.558395739s" podCreationTimestamp="2026-01-28 17:32:23 +0000 UTC" firstStartedPulling="2026-01-28 17:32:25.934681831 +0000 UTC m=+992.102470051" lastFinishedPulling="2026-01-28 17:32:40.827270155 +0000 UTC m=+1006.995058385" observedRunningTime="2026-01-28 17:32:42.554826846 +0000 UTC m=+1008.722615076" watchObservedRunningTime="2026-01-28 17:32:42.558395739 +0000 UTC m=+1008.726183969" Jan 28 17:32:53 crc kubenswrapper[5001]: I0128 17:32:53.921274 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-pvh7q" Jan 28 17:32:53 crc kubenswrapper[5001]: I0128 17:32:53.939858 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-jvbp8" Jan 28 17:32:53 crc kubenswrapper[5001]: I0128 17:32:53.941315 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jncbk" Jan 28 17:32:53 crc kubenswrapper[5001]: E0128 17:32:53.992358 5001 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127" Jan 28 17:32:53 crc kubenswrapper[5001]: E0128 17:32:53.992543 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rkjc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-647dx_openstack-operators(1ea33ae1-a3ae-4f47-b28d-166e582f8b83): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 17:32:53 crc kubenswrapper[5001]: E0128 17:32:53.994654 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-647dx" podUID="1ea33ae1-a3ae-4f47-b28d-166e582f8b83" Jan 28 17:32:53 crc kubenswrapper[5001]: I0128 17:32:53.995344 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-rpd8k" Jan 28 17:32:54 crc kubenswrapper[5001]: I0128 17:32:54.026884 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dt2m9" Jan 28 17:32:54 crc kubenswrapper[5001]: E0128 17:32:54.135566 5001 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 28 17:32:54 crc kubenswrapper[5001]: E0128 17:32:54.135740 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xspn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-92rdm_openstack-operators(95fa542e-01b1-4cd6-878e-7afba27a9e5f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 17:32:54 crc kubenswrapper[5001]: E0128 17:32:54.136968 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-92rdm" podUID="95fa542e-01b1-4cd6-878e-7afba27a9e5f" Jan 28 17:32:54 crc kubenswrapper[5001]: I0128 17:32:54.145745 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qrfs7" Jan 28 17:32:54 crc kubenswrapper[5001]: I0128 17:32:54.263156 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-cv7jq" Jan 28 17:32:54 
crc kubenswrapper[5001]: I0128 17:32:54.356866 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-twmcs" Jan 28 17:32:54 crc kubenswrapper[5001]: I0128 17:32:54.447522 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-l44l6" Jan 28 17:32:54 crc kubenswrapper[5001]: I0128 17:32:54.527512 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ll4sm" Jan 28 17:32:54 crc kubenswrapper[5001]: I0128 17:32:54.700479 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-djkj7" Jan 28 17:32:54 crc kubenswrapper[5001]: I0128 17:32:54.760160 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-np74j" Jan 28 17:32:55 crc kubenswrapper[5001]: E0128 17:32:55.036757 5001 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84" Jan 28 17:32:55 crc kubenswrapper[5001]: E0128 17:32:55.037547 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-trkk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-6kwks_openstack-operators(660281b0-2db3-4f96-a8c5-69c0ca0a5072): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 17:32:55 crc kubenswrapper[5001]: E0128 17:32:55.040028 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6kwks" podUID="660281b0-2db3-4f96-a8c5-69c0ca0a5072" Jan 28 17:32:55 crc kubenswrapper[5001]: I0128 17:32:55.773074 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert\") pod \"infra-operator-controller-manager-694cf4f878-jm966\" (UID: \"42016b42-c753-4265-9902-c2969117ad64\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-jm966" Jan 28 17:32:55 crc kubenswrapper[5001]: I0128 17:32:55.779899 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/42016b42-c753-4265-9902-c2969117ad64-cert\") pod \"infra-operator-controller-manager-694cf4f878-jm966\" (UID: \"42016b42-c753-4265-9902-c2969117ad64\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-jm966" Jan 28 17:32:55 crc kubenswrapper[5001]: I0128 17:32:55.830110 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-jxkbc" Jan 28 17:32:55 crc kubenswrapper[5001]: I0128 17:32:55.838972 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-jm966" Jan 28 17:32:56 crc kubenswrapper[5001]: I0128 17:32:56.076363 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854689x6\" (UID: \"ee5300e4-6c64-4919-9ac3-1e8a9779abc3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6" Jan 28 17:32:56 crc kubenswrapper[5001]: I0128 17:32:56.081546 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee5300e4-6c64-4919-9ac3-1e8a9779abc3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854689x6\" (UID: \"ee5300e4-6c64-4919-9ac3-1e8a9779abc3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6" Jan 28 17:32:56 crc kubenswrapper[5001]: I0128 17:32:56.271248 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-vq7zk" Jan 28 17:32:56 crc kubenswrapper[5001]: I0128 17:32:56.280085 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6" Jan 28 17:32:56 crc kubenswrapper[5001]: I0128 17:32:56.583220 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:56 crc kubenswrapper[5001]: I0128 17:32:56.583283 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:56 crc kubenswrapper[5001]: I0128 17:32:56.591865 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-webhook-certs\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:56 crc kubenswrapper[5001]: I0128 17:32:56.605256 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73f5ff01-c3cc-4fa1-b265-09a6716a24a5-metrics-certs\") pod \"openstack-operator-controller-manager-556755cfd4-d79zz\" (UID: \"73f5ff01-c3cc-4fa1-b265-09a6716a24a5\") " pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:56 crc kubenswrapper[5001]: E0128 17:32:56.647431 5001 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 28 17:32:56 crc 
kubenswrapper[5001]: E0128 17:32:56.647644 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-46qg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-dlw8k_openstack-operators(95ef1fe7-914c-4c1e-9468-636a81ec6cce): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 17:32:56 crc kubenswrapper[5001]: E0128 17:32:56.648872 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dlw8k" podUID="95ef1fe7-914c-4c1e-9468-636a81ec6cce" Jan 28 17:32:56 crc kubenswrapper[5001]: I0128 17:32:56.722724 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lsrts" Jan 28 17:32:56 crc kubenswrapper[5001]: I0128 17:32:56.731363 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:57 crc kubenswrapper[5001]: I0128 17:32:57.880734 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz"] Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 17:32:58.041418 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6"] Jan 28 17:32:58 crc kubenswrapper[5001]: W0128 17:32:58.044063 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee5300e4_6c64_4919_9ac3_1e8a9779abc3.slice/crio-b0b2e6fc7861b67d5365dd28953b12cc7c363a46e4b6bbca1d09df1481afb4f6 WatchSource:0}: Error finding container b0b2e6fc7861b67d5365dd28953b12cc7c363a46e4b6bbca1d09df1481afb4f6: Status 404 returned error can't find the container with id b0b2e6fc7861b67d5365dd28953b12cc7c363a46e4b6bbca1d09df1481afb4f6 Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 17:32:58.054966 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-jm966"] Jan 28 17:32:58 crc kubenswrapper[5001]: W0128 17:32:58.079485 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42016b42_c753_4265_9902_c2969117ad64.slice/crio-1ca4dd7f23c573a63ab91e229a8fd01d235b553509d90be2d9e64ed32b557fdb WatchSource:0}: Error finding container 1ca4dd7f23c573a63ab91e229a8fd01d235b553509d90be2d9e64ed32b557fdb: Status 404 returned error can't find the container with id 1ca4dd7f23c573a63ab91e229a8fd01d235b553509d90be2d9e64ed32b557fdb Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 17:32:58.577498 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c44sh" event={"ID":"730865cc-5b68-4c45-927b-8a5fee90c539","Type":"ContainerStarted","Data":"ab425934036b2619c42f994ebc95dc9d4a2a0cf1df9463dcf23e8e79ca7987cb"} Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 17:32:58.577719 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c44sh" Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 17:32:58.579173 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct" event={"ID":"65012584-29ae-4c06-9cd0-e30a86d7ceca","Type":"ContainerStarted","Data":"df565de91cc7bfcfd678f7bcc3d58bb9767114f289a2a287d71df6cee3cb62eb"} Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 17:32:58.579363 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct" Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 17:32:58.581335 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6" event={"ID":"ee5300e4-6c64-4919-9ac3-1e8a9779abc3","Type":"ContainerStarted","Data":"b0b2e6fc7861b67d5365dd28953b12cc7c363a46e4b6bbca1d09df1481afb4f6"} Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 17:32:58.584611 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-s9l5b" 
event={"ID":"a00c19fe-2da2-45ce-81b6-a32c17bbb1e7","Type":"ContainerStarted","Data":"bc90933c84c5775255b2850bc12b66b0176495b41573637942d81bcd79fba825"} Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 17:32:58.584780 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-s9l5b" Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 17:32:58.586432 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" event={"ID":"73f5ff01-c3cc-4fa1-b265-09a6716a24a5","Type":"ContainerStarted","Data":"ac44f666b0536ab206b8ca1939c4e83e29c8fa53c0d87bdc29ea3351356654f8"} Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 17:32:58.586482 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" event={"ID":"73f5ff01-c3cc-4fa1-b265-09a6716a24a5","Type":"ContainerStarted","Data":"4e1d2f70ec00e7d85c326faf2da3a9d2588c06447b614b4fa9ebf5341c998e5d"} Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 17:32:58.586556 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 17:32:58.587640 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-jm966" event={"ID":"42016b42-c753-4265-9902-c2969117ad64","Type":"ContainerStarted","Data":"1ca4dd7f23c573a63ab91e229a8fd01d235b553509d90be2d9e64ed32b557fdb"} Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 17:32:58.589554 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-p6lff" event={"ID":"0d72532d-5aac-40f4-b308-4ac21a287e81","Type":"ContainerStarted","Data":"74738b793a917037ca94ca89c81146afe51657d33cfda24d7defb1d351fef85b"} Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 17:32:58.589836 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-p6lff" Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 17:32:58.610490 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c44sh" podStartSLOduration=3.447100336 podStartE2EDuration="35.610466586s" podCreationTimestamp="2026-01-28 17:32:23 +0000 UTC" firstStartedPulling="2026-01-28 17:32:25.601130638 +0000 UTC m=+991.768918868" lastFinishedPulling="2026-01-28 17:32:57.764496898 +0000 UTC m=+1023.932285118" observedRunningTime="2026-01-28 17:32:58.604664588 +0000 UTC m=+1024.772452828" watchObservedRunningTime="2026-01-28 17:32:58.610466586 +0000 UTC m=+1024.778254816" Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 17:32:58.623701 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-s9l5b" podStartSLOduration=3.79930445 podStartE2EDuration="35.623681399s" podCreationTimestamp="2026-01-28 17:32:23 +0000 UTC" firstStartedPulling="2026-01-28 17:32:25.934900638 +0000 UTC m=+992.102688868" lastFinishedPulling="2026-01-28 17:32:57.759277587 +0000 UTC m=+1023.927065817" observedRunningTime="2026-01-28 17:32:58.622568427 +0000 UTC m=+1024.790356657" watchObservedRunningTime="2026-01-28 17:32:58.623681399 +0000 UTC m=+1024.791469629" Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 
17:32:58.641768 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-p6lff" podStartSLOduration=3.811609797 podStartE2EDuration="35.641751983s" podCreationTimestamp="2026-01-28 17:32:23 +0000 UTC" firstStartedPulling="2026-01-28 17:32:25.934231438 +0000 UTC m=+992.102019668" lastFinishedPulling="2026-01-28 17:32:57.764373624 +0000 UTC m=+1023.932161854" observedRunningTime="2026-01-28 17:32:58.637274303 +0000 UTC m=+1024.805062543" watchObservedRunningTime="2026-01-28 17:32:58.641751983 +0000 UTC m=+1024.809540213" Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 17:32:58.663486 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" podStartSLOduration=34.663471042 podStartE2EDuration="34.663471042s" podCreationTimestamp="2026-01-28 17:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:32:58.660574128 +0000 UTC m=+1024.828362368" watchObservedRunningTime="2026-01-28 17:32:58.663471042 +0000 UTC m=+1024.831259272" Jan 28 17:32:58 crc kubenswrapper[5001]: I0128 17:32:58.685221 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct" podStartSLOduration=3.473848562 podStartE2EDuration="35.685200282s" podCreationTimestamp="2026-01-28 17:32:23 +0000 UTC" firstStartedPulling="2026-01-28 17:32:25.567401841 +0000 UTC m=+991.735190071" lastFinishedPulling="2026-01-28 17:32:57.778753561 +0000 UTC m=+1023.946541791" observedRunningTime="2026-01-28 17:32:58.68306627 +0000 UTC m=+1024.850854500" watchObservedRunningTime="2026-01-28 17:32:58.685200282 +0000 UTC m=+1024.852988512" Jan 28 17:33:01 crc kubenswrapper[5001]: I0128 17:33:01.611037 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-jm966" event={"ID":"42016b42-c753-4265-9902-c2969117ad64","Type":"ContainerStarted","Data":"cfafef0892b558c466bc0af8324b46e85efe8baa5b3efd30400f08c386fbfaf9"} Jan 28 17:33:01 crc kubenswrapper[5001]: I0128 17:33:01.612291 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-jm966" Jan 28 17:33:01 crc kubenswrapper[5001]: I0128 17:33:01.613597 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6" event={"ID":"ee5300e4-6c64-4919-9ac3-1e8a9779abc3","Type":"ContainerStarted","Data":"c124bdef74b1e90c67875a2afe0328aadaa8c5bbf435091ad0880ee454f13289"} Jan 28 17:33:01 crc kubenswrapper[5001]: I0128 17:33:01.614001 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6" Jan 28 17:33:01 crc kubenswrapper[5001]: I0128 17:33:01.633585 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-jm966" podStartSLOduration=35.816586699 podStartE2EDuration="38.633563581s" podCreationTimestamp="2026-01-28 17:32:23 +0000 UTC" firstStartedPulling="2026-01-28 17:32:58.084418967 +0000 UTC m=+1024.252207197" lastFinishedPulling="2026-01-28 17:33:00.901395849 +0000 UTC m=+1027.069184079" observedRunningTime="2026-01-28 
17:33:01.628135414 +0000 UTC m=+1027.795923644" watchObservedRunningTime="2026-01-28 17:33:01.633563581 +0000 UTC m=+1027.801351811" Jan 28 17:33:01 crc kubenswrapper[5001]: I0128 17:33:01.661660 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6" podStartSLOduration=35.801681148 podStartE2EDuration="38.661639155s" podCreationTimestamp="2026-01-28 17:32:23 +0000 UTC" firstStartedPulling="2026-01-28 17:32:58.048071574 +0000 UTC m=+1024.215859804" lastFinishedPulling="2026-01-28 17:33:00.908029581 +0000 UTC m=+1027.075817811" observedRunningTime="2026-01-28 17:33:01.65663794 +0000 UTC m=+1027.824426180" watchObservedRunningTime="2026-01-28 17:33:01.661639155 +0000 UTC m=+1027.829427385" Jan 28 17:33:04 crc kubenswrapper[5001]: I0128 17:33:04.389269 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-p6lff" Jan 28 17:33:04 crc kubenswrapper[5001]: I0128 17:33:04.390951 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c44sh" Jan 28 17:33:04 crc kubenswrapper[5001]: I0128 17:33:04.430315 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct" Jan 28 17:33:04 crc kubenswrapper[5001]: I0128 17:33:04.565259 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-s9l5b" Jan 28 17:33:04 crc kubenswrapper[5001]: I0128 17:33:04.834149 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:33:04 crc kubenswrapper[5001]: I0128 17:33:04.834203 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:33:06 crc kubenswrapper[5001]: I0128 17:33:06.285639 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854689x6" Jan 28 17:33:06 crc kubenswrapper[5001]: E0128 17:33:06.595335 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6kwks" podUID="660281b0-2db3-4f96-a8c5-69c0ca0a5072" Jan 28 17:33:06 crc kubenswrapper[5001]: I0128 17:33:06.739484 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-556755cfd4-d79zz" Jan 28 17:33:08 crc kubenswrapper[5001]: E0128 17:33:08.595673 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-92rdm" podUID="95fa542e-01b1-4cd6-878e-7afba27a9e5f" Jan 28 17:33:09 crc kubenswrapper[5001]: E0128 17:33:09.595340 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-647dx" podUID="1ea33ae1-a3ae-4f47-b28d-166e582f8b83" Jan 28 17:33:10 crc kubenswrapper[5001]: E0128 17:33:10.596135 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dlw8k" podUID="95ef1fe7-914c-4c1e-9468-636a81ec6cce" Jan 28 17:33:15 crc kubenswrapper[5001]: I0128 17:33:15.850140 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-jm966" Jan 28 17:33:21 crc kubenswrapper[5001]: I0128 17:33:21.746225 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-92rdm" event={"ID":"95fa542e-01b1-4cd6-878e-7afba27a9e5f","Type":"ContainerStarted","Data":"d5f5b5369b8dc295f97fdf8ea162604b88fdb2960926d6131f28fa8df3d3c0ac"} Jan 28 17:33:21 crc kubenswrapper[5001]: I0128 17:33:21.746939 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-92rdm" Jan 28 17:33:21 crc kubenswrapper[5001]: I0128 17:33:21.747573 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6kwks" event={"ID":"660281b0-2db3-4f96-a8c5-69c0ca0a5072","Type":"ContainerStarted","Data":"65c915895c3089ce921c69c2aeefa121737a580e1de0f42ff36c1388be3e77ca"} Jan 28 17:33:21 crc kubenswrapper[5001]: I0128 17:33:21.748017 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6kwks" Jan 28 17:33:21 crc kubenswrapper[5001]: I0128 17:33:21.761892 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-92rdm" podStartSLOduration=3.647569865 podStartE2EDuration="58.761872074s" podCreationTimestamp="2026-01-28 17:32:23 +0000 UTC" firstStartedPulling="2026-01-28 17:32:25.973885457 +0000 UTC m=+992.141673687" lastFinishedPulling="2026-01-28 17:33:21.088187666 +0000 UTC m=+1047.255975896" observedRunningTime="2026-01-28 17:33:21.759402673 +0000 UTC m=+1047.927190913" watchObservedRunningTime="2026-01-28 17:33:21.761872074 +0000 UTC m=+1047.929660304" Jan 28 17:33:22 crc kubenswrapper[5001]: I0128 17:33:22.757694 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-647dx" event={"ID":"1ea33ae1-a3ae-4f47-b28d-166e582f8b83","Type":"ContainerStarted","Data":"b61bc367dfa5d904d9af171a0acf175624468f353b46b3e6a8ad926f9f98f1c6"} 
Jan 28 17:33:22 crc kubenswrapper[5001]: I0128 17:33:22.758242 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-647dx" Jan 28 17:33:22 crc kubenswrapper[5001]: I0128 17:33:22.759686 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dlw8k" event={"ID":"95ef1fe7-914c-4c1e-9468-636a81ec6cce","Type":"ContainerStarted","Data":"42c8530c5254606c244a98e410dd5b1d46432d87bb56d9c3a57127f21b77fac5"} Jan 28 17:33:22 crc kubenswrapper[5001]: I0128 17:33:22.776010 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6kwks" podStartSLOduration=4.676758223 podStartE2EDuration="59.775988775s" podCreationTimestamp="2026-01-28 17:32:23 +0000 UTC" firstStartedPulling="2026-01-28 17:32:25.989942203 +0000 UTC m=+992.157730433" lastFinishedPulling="2026-01-28 17:33:21.089172755 +0000 UTC m=+1047.256960985" observedRunningTime="2026-01-28 17:33:21.77206941 +0000 UTC m=+1047.939857640" watchObservedRunningTime="2026-01-28 17:33:22.775988775 +0000 UTC m=+1048.943777005" Jan 28 17:33:22 crc kubenswrapper[5001]: I0128 17:33:22.777562 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-647dx" podStartSLOduration=2.6680520789999997 podStartE2EDuration="58.777556191s" podCreationTimestamp="2026-01-28 17:32:24 +0000 UTC" firstStartedPulling="2026-01-28 17:32:25.98743039 +0000 UTC m=+992.155218620" lastFinishedPulling="2026-01-28 17:33:22.096934502 +0000 UTC m=+1048.264722732" observedRunningTime="2026-01-28 17:33:22.77373243 +0000 UTC m=+1048.941520660" watchObservedRunningTime="2026-01-28 17:33:22.777556191 +0000 UTC m=+1048.945344421" Jan 28 17:33:22 crc kubenswrapper[5001]: I0128 17:33:22.797133 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dlw8k" podStartSLOduration=2.678169912 podStartE2EDuration="58.797110927s" podCreationTimestamp="2026-01-28 17:32:24 +0000 UTC" firstStartedPulling="2026-01-28 17:32:25.979221542 +0000 UTC m=+992.147009772" lastFinishedPulling="2026-01-28 17:33:22.098162557 +0000 UTC m=+1048.265950787" observedRunningTime="2026-01-28 17:33:22.793334738 +0000 UTC m=+1048.961122968" watchObservedRunningTime="2026-01-28 17:33:22.797110927 +0000 UTC m=+1048.964899157" Jan 28 17:33:34 crc kubenswrapper[5001]: I0128 17:33:34.412897 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-6kwks" Jan 28 17:33:34 crc kubenswrapper[5001]: I0128 17:33:34.623873 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-92rdm" Jan 28 17:33:34 crc kubenswrapper[5001]: I0128 17:33:34.642551 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-647dx" Jan 28 17:33:34 crc kubenswrapper[5001]: I0128 17:33:34.833954 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 
17:33:34 crc kubenswrapper[5001]: I0128 17:33:34.834037 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:33:34 crc kubenswrapper[5001]: I0128 17:33:34.834085 5001 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:33:34 crc kubenswrapper[5001]: I0128 17:33:34.834811 5001 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"64823f4f0cdb758673ce03dbcf6563ab253f05451ac978f41bb7afc046d56ae8"} pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:33:34 crc kubenswrapper[5001]: I0128 17:33:34.834890 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" containerID="cri-o://64823f4f0cdb758673ce03dbcf6563ab253f05451ac978f41bb7afc046d56ae8" gracePeriod=600 Jan 28 17:33:35 crc kubenswrapper[5001]: I0128 17:33:35.845367 5001 generic.go:334] "Generic (PLEG): container finished" podID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerID="64823f4f0cdb758673ce03dbcf6563ab253f05451ac978f41bb7afc046d56ae8" exitCode=0 Jan 28 17:33:35 crc kubenswrapper[5001]: I0128 17:33:35.845664 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerDied","Data":"64823f4f0cdb758673ce03dbcf6563ab253f05451ac978f41bb7afc046d56ae8"} Jan 28 17:33:35 crc kubenswrapper[5001]: I0128 17:33:35.845690 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerStarted","Data":"ccc5cb4e707a79570ebb35140ac6a6c78fccff38dad2a94d8294b2b7a155b3e0"} Jan 28 17:33:35 crc kubenswrapper[5001]: I0128 17:33:35.845708 5001 scope.go:117] "RemoveContainer" containerID="bcd1bb3eeb7df10e3aeb349c79594cb4ff827106a1c1b316d9b57fdb098e8eef" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.259840 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 28 17:33:45 crc kubenswrapper[5001]: E0128 17:33:45.260705 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a6f242-b936-4752-b868-ebffda3b8657" containerName="extract-utilities" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.260720 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a6f242-b936-4752-b868-ebffda3b8657" containerName="extract-utilities" Jan 28 17:33:45 crc kubenswrapper[5001]: E0128 17:33:45.260742 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a6f242-b936-4752-b868-ebffda3b8657" containerName="registry-server" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.260750 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a6f242-b936-4752-b868-ebffda3b8657" containerName="registry-server" Jan 28 17:33:45 crc kubenswrapper[5001]: E0128 17:33:45.260766 5001 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a6f242-b936-4752-b868-ebffda3b8657" containerName="extract-content" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.260772 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a6f242-b936-4752-b868-ebffda3b8657" containerName="extract-content" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.260918 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a6f242-b936-4752-b868-ebffda3b8657" containerName="registry-server" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.261771 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.263929 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"kube-root-ca.crt" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.264468 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-server-dockercfg-dhxrv" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.264734 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-server-conf" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.264926 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-plugins-conf" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.265058 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-erlang-cookie" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.265156 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-default-user" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.266261 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openshift-service-ca.crt" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.281335 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.334911 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q44jh\" (UniqueName: \"kubernetes.io/projected/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-kube-api-access-q44jh\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.334988 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.335023 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.335155 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-56b697d0-940d-409c-a562-692d0f6d3bd7\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56b697d0-940d-409c-a562-692d0f6d3bd7\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.335303 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.335322 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.335434 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.335540 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.335571 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.436568 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q44jh\" (UniqueName: \"kubernetes.io/projected/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-kube-api-access-q44jh\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.436618 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.436644 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.436670 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-56b697d0-940d-409c-a562-692d0f6d3bd7\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56b697d0-940d-409c-a562-692d0f6d3bd7\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.436722 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.436742 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.436782 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.436818 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.436839 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.437699 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.437837 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.438215 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.438461 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " 
pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.439365 5001 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.439393 5001 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-56b697d0-940d-409c-a562-692d0f6d3bd7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56b697d0-940d-409c-a562-692d0f6d3bd7\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f72a2b43bcdae99cb49c7b8f04ca5a4d630471bc91fa1138552091d47c497621/globalmount\"" pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.442873 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.442875 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.449953 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.455123 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q44jh\" (UniqueName: \"kubernetes.io/projected/420bd810-85d0-4ced-bcd8-3ae62c8c79e4-kube-api-access-q44jh\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.465887 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-56b697d0-940d-409c-a562-692d0f6d3bd7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56b697d0-940d-409c-a562-692d0f6d3bd7\") pod \"rabbitmq-server-0\" (UID: \"420bd810-85d0-4ced-bcd8-3ae62c8c79e4\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.627830 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.642616 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.644222 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.646128 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-dockercfg-pp6gf" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.647344 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-erlang-cookie" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.647537 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-default-user" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.647736 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-conf" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.647870 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-plugins-conf" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.662164 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.743931 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bdcddc03-22a9-44af-8c79-67fe309358ec-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.744166 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bdcddc03-22a9-44af-8c79-67fe309358ec-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.744199 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bdcddc03-22a9-44af-8c79-67fe309358ec-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.744224 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bdcddc03-22a9-44af-8c79-67fe309358ec-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.744279 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdr7v\" (UniqueName: \"kubernetes.io/projected/bdcddc03-22a9-44af-8c79-67fe309358ec-kube-api-access-gdr7v\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.744299 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/bdcddc03-22a9-44af-8c79-67fe309358ec-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.744314 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bdcddc03-22a9-44af-8c79-67fe309358ec-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.744340 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0827f3f2-af79-44ec-ac05-faa64be4db1c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0827f3f2-af79-44ec-ac05-faa64be4db1c\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.744376 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bdcddc03-22a9-44af-8c79-67fe309358ec-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.845916 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bdcddc03-22a9-44af-8c79-67fe309358ec-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.846520 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bdcddc03-22a9-44af-8c79-67fe309358ec-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.846584 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0827f3f2-af79-44ec-ac05-faa64be4db1c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0827f3f2-af79-44ec-ac05-faa64be4db1c\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.846631 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bdcddc03-22a9-44af-8c79-67fe309358ec-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.846707 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bdcddc03-22a9-44af-8c79-67fe309358ec-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " 
pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.846750 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bdcddc03-22a9-44af-8c79-67fe309358ec-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.846790 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bdcddc03-22a9-44af-8c79-67fe309358ec-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.846820 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bdcddc03-22a9-44af-8c79-67fe309358ec-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.846861 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdr7v\" (UniqueName: \"kubernetes.io/projected/bdcddc03-22a9-44af-8c79-67fe309358ec-kube-api-access-gdr7v\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.849032 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bdcddc03-22a9-44af-8c79-67fe309358ec-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.850352 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bdcddc03-22a9-44af-8c79-67fe309358ec-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.850675 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bdcddc03-22a9-44af-8c79-67fe309358ec-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.853626 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bdcddc03-22a9-44af-8c79-67fe309358ec-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.855888 5001 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.855942 5001 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0827f3f2-af79-44ec-ac05-faa64be4db1c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0827f3f2-af79-44ec-ac05-faa64be4db1c\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/33bbbe90efef12a58b200a3eb37fd74fee293bda2474fc63859613d837a1bf13/globalmount\"" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.856240 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bdcddc03-22a9-44af-8c79-67fe309358ec-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.864561 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bdcddc03-22a9-44af-8c79-67fe309358ec-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.867516 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bdcddc03-22a9-44af-8c79-67fe309358ec-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.874580 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdr7v\" (UniqueName: \"kubernetes.io/projected/bdcddc03-22a9-44af-8c79-67fe309358ec-kube-api-access-gdr7v\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.906950 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0827f3f2-af79-44ec-ac05-faa64be4db1c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0827f3f2-af79-44ec-ac05-faa64be4db1c\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"bdcddc03-22a9-44af-8c79-67fe309358ec\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.944234 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.945632 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.948649 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-default-user" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.948912 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-erlang-cookie" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.949251 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-plugins-conf" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.949434 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-server-dockercfg-422lc" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.951024 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.952265 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.953534 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-server-conf" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.967644 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-dockercfg-sbh66" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.968869 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-svc" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.982580 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-scripts" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.992565 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config-data" Jan 28 17:33:45 crc kubenswrapper[5001]: I0128 17:33:45.999212 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.050206 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.050905 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"combined-ca-bundle" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.055142 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.137674 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 28 17:33:46 crc kubenswrapper[5001]: W0128 17:33:46.151056 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod420bd810_85d0_4ced_bcd8_3ae62c8c79e4.slice/crio-e935fc2ebe7cf665a938fe6199b9ddce291b121e9eb827fd20297cab9610968d WatchSource:0}: Error finding container e935fc2ebe7cf665a938fe6199b9ddce291b121e9eb827fd20297cab9610968d: Status 404 returned error can't find the container with id e935fc2ebe7cf665a938fe6199b9ddce291b121e9eb827fd20297cab9610968d Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.155012 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1bc7adbb-0250-4320-98f4-7a0a69b77724-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.155067 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1bc7adbb-0250-4320-98f4-7a0a69b77724-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.155096 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efb190a-f4c2-4761-9fab-7e26fc702121-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.155130 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1bc7adbb-0250-4320-98f4-7a0a69b77724-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.155155 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wndw\" (UniqueName: \"kubernetes.io/projected/4efb190a-f4c2-4761-9fab-7e26fc702121-kube-api-access-7wndw\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.155181 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4efb190a-f4c2-4761-9fab-7e26fc702121-config-data-default\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " 
pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.155219 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr87v\" (UniqueName: \"kubernetes.io/projected/1bc7adbb-0250-4320-98f4-7a0a69b77724-kube-api-access-cr87v\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.155242 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1bc7adbb-0250-4320-98f4-7a0a69b77724-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.155261 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efb190a-f4c2-4761-9fab-7e26fc702121-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.155281 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4efb190a-f4c2-4761-9fab-7e26fc702121-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.155305 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-55fc16d9-e230-47a7-b51d-d05fa9a6d61c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-55fc16d9-e230-47a7-b51d-d05fa9a6d61c\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.155339 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-da1dad23-d7bd-4a54-a662-d0482c1ead59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da1dad23-d7bd-4a54-a662-d0482c1ead59\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.155362 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1bc7adbb-0250-4320-98f4-7a0a69b77724-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.155394 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1bc7adbb-0250-4320-98f4-7a0a69b77724-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.155448 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4efb190a-f4c2-4761-9fab-7e26fc702121-kolla-config\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.155474 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1bc7adbb-0250-4320-98f4-7a0a69b77724-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.155516 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4efb190a-f4c2-4761-9fab-7e26fc702121-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.160242 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.161449 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/memcached-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.162136 5001 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.165923 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"memcached-memcached-dockercfg-xtxxk" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.166106 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"memcached-config-data" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.196491 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.257273 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-da1dad23-d7bd-4a54-a662-d0482c1ead59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da1dad23-d7bd-4a54-a662-d0482c1ead59\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.257727 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1bc7adbb-0250-4320-98f4-7a0a69b77724-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.257757 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6spx7\" (UniqueName: \"kubernetes.io/projected/cd92fc04-2e00-4c0a-b704-204aeeb70ff1-kube-api-access-6spx7\") pod \"memcached-0\" (UID: \"cd92fc04-2e00-4c0a-b704-204aeeb70ff1\") " pod="nova-kuttl-default/memcached-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.257782 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1bc7adbb-0250-4320-98f4-7a0a69b77724-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.257813 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd92fc04-2e00-4c0a-b704-204aeeb70ff1-kolla-config\") pod \"memcached-0\" (UID: \"cd92fc04-2e00-4c0a-b704-204aeeb70ff1\") " pod="nova-kuttl-default/memcached-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.257853 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4efb190a-f4c2-4761-9fab-7e26fc702121-kolla-config\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.257878 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1bc7adbb-0250-4320-98f4-7a0a69b77724-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.257906 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd92fc04-2e00-4c0a-b704-204aeeb70ff1-config-data\") pod \"memcached-0\" (UID: \"cd92fc04-2e00-4c0a-b704-204aeeb70ff1\") " pod="nova-kuttl-default/memcached-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.257946 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4efb190a-f4c2-4761-9fab-7e26fc702121-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.257988 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1bc7adbb-0250-4320-98f4-7a0a69b77724-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.258018 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1bc7adbb-0250-4320-98f4-7a0a69b77724-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.258041 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efb190a-f4c2-4761-9fab-7e26fc702121-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.258066 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1bc7adbb-0250-4320-98f4-7a0a69b77724-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " 
pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.258093 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wndw\" (UniqueName: \"kubernetes.io/projected/4efb190a-f4c2-4761-9fab-7e26fc702121-kube-api-access-7wndw\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.258115 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4efb190a-f4c2-4761-9fab-7e26fc702121-config-data-default\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.258153 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr87v\" (UniqueName: \"kubernetes.io/projected/1bc7adbb-0250-4320-98f4-7a0a69b77724-kube-api-access-cr87v\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.258174 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1bc7adbb-0250-4320-98f4-7a0a69b77724-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.258200 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efb190a-f4c2-4761-9fab-7e26fc702121-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.258224 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-55fc16d9-e230-47a7-b51d-d05fa9a6d61c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-55fc16d9-e230-47a7-b51d-d05fa9a6d61c\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.258248 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4efb190a-f4c2-4761-9fab-7e26fc702121-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.258284 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1bc7adbb-0250-4320-98f4-7a0a69b77724-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.258518 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1bc7adbb-0250-4320-98f4-7a0a69b77724-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " 
pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.259572 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1bc7adbb-0250-4320-98f4-7a0a69b77724-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.259889 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4efb190a-f4c2-4761-9fab-7e26fc702121-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.260004 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4efb190a-f4c2-4761-9fab-7e26fc702121-config-data-default\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.260011 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4efb190a-f4c2-4761-9fab-7e26fc702121-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.260430 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4efb190a-f4c2-4761-9fab-7e26fc702121-kolla-config\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.260740 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1bc7adbb-0250-4320-98f4-7a0a69b77724-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.263446 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1bc7adbb-0250-4320-98f4-7a0a69b77724-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.265487 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1bc7adbb-0250-4320-98f4-7a0a69b77724-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.268753 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efb190a-f4c2-4761-9fab-7e26fc702121-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.268787 5001 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1bc7adbb-0250-4320-98f4-7a0a69b77724-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.268805 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efb190a-f4c2-4761-9fab-7e26fc702121-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.269053 5001 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.269097 5001 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-da1dad23-d7bd-4a54-a662-d0482c1ead59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da1dad23-d7bd-4a54-a662-d0482c1ead59\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/72cc639d0a1dd714c397202c612f65d9c2366f8bbffb9dfe9104632e73fb9678/globalmount\"" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.270652 5001 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.270683 5001 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-55fc16d9-e230-47a7-b51d-d05fa9a6d61c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-55fc16d9-e230-47a7-b51d-d05fa9a6d61c\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/841ee1aef1a01b4b1099587ce1f50323462b48a03e31f28def105fcc7f206abc/globalmount\"" pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.286250 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wndw\" (UniqueName: \"kubernetes.io/projected/4efb190a-f4c2-4761-9fab-7e26fc702121-kube-api-access-7wndw\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.286397 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr87v\" (UniqueName: \"kubernetes.io/projected/1bc7adbb-0250-4320-98f4-7a0a69b77724-kube-api-access-cr87v\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.318379 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-da1dad23-d7bd-4a54-a662-d0482c1ead59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da1dad23-d7bd-4a54-a662-d0482c1ead59\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bc7adbb-0250-4320-98f4-7a0a69b77724\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.326068 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-55fc16d9-e230-47a7-b51d-d05fa9a6d61c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-55fc16d9-e230-47a7-b51d-d05fa9a6d61c\") pod \"openstack-galera-0\" (UID: \"4efb190a-f4c2-4761-9fab-7e26fc702121\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.362303 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6spx7\" (UniqueName: \"kubernetes.io/projected/cd92fc04-2e00-4c0a-b704-204aeeb70ff1-kube-api-access-6spx7\") pod \"memcached-0\" (UID: \"cd92fc04-2e00-4c0a-b704-204aeeb70ff1\") " pod="nova-kuttl-default/memcached-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.362361 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd92fc04-2e00-4c0a-b704-204aeeb70ff1-kolla-config\") pod \"memcached-0\" (UID: \"cd92fc04-2e00-4c0a-b704-204aeeb70ff1\") " pod="nova-kuttl-default/memcached-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.362419 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd92fc04-2e00-4c0a-b704-204aeeb70ff1-config-data\") pod \"memcached-0\" (UID: \"cd92fc04-2e00-4c0a-b704-204aeeb70ff1\") " pod="nova-kuttl-default/memcached-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.363460 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd92fc04-2e00-4c0a-b704-204aeeb70ff1-kolla-config\") pod \"memcached-0\" (UID: \"cd92fc04-2e00-4c0a-b704-204aeeb70ff1\") " pod="nova-kuttl-default/memcached-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.363498 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd92fc04-2e00-4c0a-b704-204aeeb70ff1-config-data\") pod \"memcached-0\" (UID: \"cd92fc04-2e00-4c0a-b704-204aeeb70ff1\") " pod="nova-kuttl-default/memcached-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.382328 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6spx7\" (UniqueName: \"kubernetes.io/projected/cd92fc04-2e00-4c0a-b704-204aeeb70ff1-kube-api-access-6spx7\") pod \"memcached-0\" (UID: \"cd92fc04-2e00-4c0a-b704-204aeeb70ff1\") " pod="nova-kuttl-default/memcached-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.481063 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/memcached-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.594269 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.618625 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.629622 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.934709 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"bdcddc03-22a9-44af-8c79-67fe309358ec","Type":"ContainerStarted","Data":"5c610aca73af268f8398f1b65000bb299fa6390c191ab197db478aca42769d82"} Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.936041 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"420bd810-85d0-4ced-bcd8-3ae62c8c79e4","Type":"ContainerStarted","Data":"e935fc2ebe7cf665a938fe6199b9ddce291b121e9eb827fd20297cab9610968d"} Jan 28 17:33:46 crc kubenswrapper[5001]: I0128 17:33:46.938764 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 28 17:33:46 crc kubenswrapper[5001]: W0128 17:33:46.964958 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd92fc04_2e00_4c0a_b704_204aeeb70ff1.slice/crio-a96fd841482903eb43d749abdd7fc3a8b1af2befe29f4cad60d1d9f01f236b8c WatchSource:0}: Error finding container a96fd841482903eb43d749abdd7fc3a8b1af2befe29f4cad60d1d9f01f236b8c: Status 404 returned error can't find the container with id a96fd841482903eb43d749abdd7fc3a8b1af2befe29f4cad60d1d9f01f236b8c Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.137164 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 28 17:33:47 crc kubenswrapper[5001]: W0128 17:33:47.142225 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bc7adbb_0250_4320_98f4_7a0a69b77724.slice/crio-8e0c07105365d29f6e3c32c5cf1f974726a3fd42b765899a4916d83cb34844c5 WatchSource:0}: Error finding container 8e0c07105365d29f6e3c32c5cf1f974726a3fd42b765899a4916d83cb34844c5: Status 404 returned error can't find the container with id 8e0c07105365d29f6e3c32c5cf1f974726a3fd42b765899a4916d83cb34844c5 Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.172609 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.206868 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.208529 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.211349 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-cell1-svc" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.211665 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-config-data" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.211864 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-scripts" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.212436 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-cell1-dockercfg-26nv9" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.218128 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.376563 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/27cec822-2561-4682-bb1c-3fe4fd0805f4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.376641 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/27cec822-2561-4682-bb1c-3fe4fd0805f4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.376699 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b5c3079e-3b1c-42b7-96ad-f4120ae84988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5c3079e-3b1c-42b7-96ad-f4120ae84988\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.376741 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/27cec822-2561-4682-bb1c-3fe4fd0805f4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.376770 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8vbp\" (UniqueName: \"kubernetes.io/projected/27cec822-2561-4682-bb1c-3fe4fd0805f4-kube-api-access-s8vbp\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.376802 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/27cec822-2561-4682-bb1c-3fe4fd0805f4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc 
kubenswrapper[5001]: I0128 17:33:47.376834 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27cec822-2561-4682-bb1c-3fe4fd0805f4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.376858 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27cec822-2561-4682-bb1c-3fe4fd0805f4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.480284 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/27cec822-2561-4682-bb1c-3fe4fd0805f4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.480903 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/27cec822-2561-4682-bb1c-3fe4fd0805f4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.481456 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/27cec822-2561-4682-bb1c-3fe4fd0805f4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.481489 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/27cec822-2561-4682-bb1c-3fe4fd0805f4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.483951 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b5c3079e-3b1c-42b7-96ad-f4120ae84988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5c3079e-3b1c-42b7-96ad-f4120ae84988\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.484199 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/27cec822-2561-4682-bb1c-3fe4fd0805f4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.484234 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8vbp\" (UniqueName: \"kubernetes.io/projected/27cec822-2561-4682-bb1c-3fe4fd0805f4-kube-api-access-s8vbp\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") 
" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.484271 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/27cec822-2561-4682-bb1c-3fe4fd0805f4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.484318 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27cec822-2561-4682-bb1c-3fe4fd0805f4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.484342 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27cec822-2561-4682-bb1c-3fe4fd0805f4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.485834 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27cec822-2561-4682-bb1c-3fe4fd0805f4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.486648 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/27cec822-2561-4682-bb1c-3fe4fd0805f4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.489323 5001 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.490094 5001 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b5c3079e-3b1c-42b7-96ad-f4120ae84988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5c3079e-3b1c-42b7-96ad-f4120ae84988\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d35bf51b6c43fa7104ba89fce1cc1e2274a521b4d592b7317606190df4ffb308/globalmount\"" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.504520 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/27cec822-2561-4682-bb1c-3fe4fd0805f4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.509183 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8vbp\" (UniqueName: \"kubernetes.io/projected/27cec822-2561-4682-bb1c-3fe4fd0805f4-kube-api-access-s8vbp\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.510797 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27cec822-2561-4682-bb1c-3fe4fd0805f4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.537355 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b5c3079e-3b1c-42b7-96ad-f4120ae84988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5c3079e-3b1c-42b7-96ad-f4120ae84988\") pod \"openstack-cell1-galera-0\" (UID: \"27cec822-2561-4682-bb1c-3fe4fd0805f4\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.841352 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.945564 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"cd92fc04-2e00-4c0a-b704-204aeeb70ff1","Type":"ContainerStarted","Data":"a96fd841482903eb43d749abdd7fc3a8b1af2befe29f4cad60d1d9f01f236b8c"} Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.948317 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"4efb190a-f4c2-4761-9fab-7e26fc702121","Type":"ContainerStarted","Data":"e1d2f030f84fad31f21e3ed2a5e4fc6b0b7fb3a23d4fd5cbdabd9367c099f4bd"} Jan 28 17:33:47 crc kubenswrapper[5001]: I0128 17:33:47.950071 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"1bc7adbb-0250-4320-98f4-7a0a69b77724","Type":"ContainerStarted","Data":"8e0c07105365d29f6e3c32c5cf1f974726a3fd42b765899a4916d83cb34844c5"} Jan 28 17:33:48 crc kubenswrapper[5001]: I0128 17:33:48.426461 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 28 17:33:48 crc kubenswrapper[5001]: W0128 17:33:48.430041 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27cec822_2561_4682_bb1c_3fe4fd0805f4.slice/crio-74aa20a92b33c88f1515d9a7adcab8d34d9fa99c8784e73a027f586bd02c1835 WatchSource:0}: Error finding container 74aa20a92b33c88f1515d9a7adcab8d34d9fa99c8784e73a027f586bd02c1835: Status 404 returned error can't find the container with id 74aa20a92b33c88f1515d9a7adcab8d34d9fa99c8784e73a027f586bd02c1835 Jan 28 17:33:48 crc kubenswrapper[5001]: I0128 17:33:48.959885 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"27cec822-2561-4682-bb1c-3fe4fd0805f4","Type":"ContainerStarted","Data":"74aa20a92b33c88f1515d9a7adcab8d34d9fa99c8784e73a027f586bd02c1835"} Jan 28 17:34:10 crc kubenswrapper[5001]: E0128 17:34:10.627277 5001 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 28 17:34:10 crc kubenswrapper[5001]: E0128 17:34:10.628027 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q44jh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_nova-kuttl-default(420bd810-85d0-4ced-bcd8-3ae62c8c79e4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 17:34:10 crc kubenswrapper[5001]: E0128 17:34:10.630021 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="nova-kuttl-default/rabbitmq-server-0" podUID="420bd810-85d0-4ced-bcd8-3ae62c8c79e4" Jan 28 17:34:10 crc kubenswrapper[5001]: E0128 17:34:10.780463 5001 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 28 17:34:10 crc kubenswrapper[5001]: E0128 17:34:10.780614 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 
0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cr87v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_nova-kuttl-default(1bc7adbb-0250-4320-98f4-7a0a69b77724): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 17:34:10 crc kubenswrapper[5001]: E0128 17:34:10.782275 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="nova-kuttl-default/rabbitmq-cell1-server-0" podUID="1bc7adbb-0250-4320-98f4-7a0a69b77724" Jan 28 17:34:10 crc kubenswrapper[5001]: E0128 17:34:10.812037 5001 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 28 17:34:10 crc kubenswrapper[5001]: E0128 17:34:10.812241 5001 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m 
DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdr7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-broadcaster-server-0_nova-kuttl-default(bdcddc03-22a9-44af-8c79-67fe309358ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 17:34:10 crc kubenswrapper[5001]: E0128 17:34:10.813432 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" podUID="bdcddc03-22a9-44af-8c79-67fe309358ec" Jan 28 17:34:11 crc kubenswrapper[5001]: I0128 17:34:11.116082 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"cd92fc04-2e00-4c0a-b704-204aeeb70ff1","Type":"ContainerStarted","Data":"d0397efd00ae740481e6e04a074911455f3934584a9216587ec6bae90d745781"} Jan 28 17:34:11 crc kubenswrapper[5001]: I0128 17:34:11.117253 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/memcached-0" Jan 28 17:34:11 crc kubenswrapper[5001]: I0128 17:34:11.118714 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"4efb190a-f4c2-4761-9fab-7e26fc702121","Type":"ContainerStarted","Data":"835319a85106139d22727a3afce8887479ed956e1d222d1aa44ca9874797047c"} Jan 28 17:34:11 crc kubenswrapper[5001]: I0128 17:34:11.119918 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"27cec822-2561-4682-bb1c-3fe4fd0805f4","Type":"ContainerStarted","Data":"2cb3f61b4c77055c73d6332eaa5a4c97e566731d4d6610dde7851ae164ca1060"} Jan 28 17:34:11 crc kubenswrapper[5001]: E0128 
17:34:11.121009 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="nova-kuttl-default/rabbitmq-server-0" podUID="420bd810-85d0-4ced-bcd8-3ae62c8c79e4" Jan 28 17:34:11 crc kubenswrapper[5001]: E0128 17:34:11.121397 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" podUID="bdcddc03-22a9-44af-8c79-67fe309358ec" Jan 28 17:34:11 crc kubenswrapper[5001]: E0128 17:34:11.122076 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="nova-kuttl-default/rabbitmq-cell1-server-0" podUID="1bc7adbb-0250-4320-98f4-7a0a69b77724" Jan 28 17:34:11 crc kubenswrapper[5001]: I0128 17:34:11.141004 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/memcached-0" podStartSLOduration=1.31757407 podStartE2EDuration="25.140966255s" podCreationTimestamp="2026-01-28 17:33:46 +0000 UTC" firstStartedPulling="2026-01-28 17:33:46.972430659 +0000 UTC m=+1073.140218890" lastFinishedPulling="2026-01-28 17:34:10.795822855 +0000 UTC m=+1096.963611075" observedRunningTime="2026-01-28 17:34:11.135791105 +0000 UTC m=+1097.303579345" watchObservedRunningTime="2026-01-28 17:34:11.140966255 +0000 UTC m=+1097.308754485" Jan 28 17:34:15 crc kubenswrapper[5001]: I0128 17:34:15.147649 5001 generic.go:334] "Generic (PLEG): container finished" podID="27cec822-2561-4682-bb1c-3fe4fd0805f4" containerID="2cb3f61b4c77055c73d6332eaa5a4c97e566731d4d6610dde7851ae164ca1060" exitCode=0 Jan 28 17:34:15 crc kubenswrapper[5001]: I0128 17:34:15.147795 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"27cec822-2561-4682-bb1c-3fe4fd0805f4","Type":"ContainerDied","Data":"2cb3f61b4c77055c73d6332eaa5a4c97e566731d4d6610dde7851ae164ca1060"} Jan 28 17:34:15 crc kubenswrapper[5001]: I0128 17:34:15.152323 5001 generic.go:334] "Generic (PLEG): container finished" podID="4efb190a-f4c2-4761-9fab-7e26fc702121" containerID="835319a85106139d22727a3afce8887479ed956e1d222d1aa44ca9874797047c" exitCode=0 Jan 28 17:34:15 crc kubenswrapper[5001]: I0128 17:34:15.152362 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"4efb190a-f4c2-4761-9fab-7e26fc702121","Type":"ContainerDied","Data":"835319a85106139d22727a3afce8887479ed956e1d222d1aa44ca9874797047c"} Jan 28 17:34:16 crc kubenswrapper[5001]: I0128 17:34:16.160912 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"27cec822-2561-4682-bb1c-3fe4fd0805f4","Type":"ContainerStarted","Data":"61eca2b2c018f085cc795a912de36f5e6d366fdda215f44ad4cbdfdb09b9a54c"} Jan 28 17:34:16 crc kubenswrapper[5001]: I0128 17:34:16.163223 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"4efb190a-f4c2-4761-9fab-7e26fc702121","Type":"ContainerStarted","Data":"8f182d7a4ce6382b692dda92179905ee4f845b35cccd6e5691314e6297b1a2da"} Jan 28 
17:34:16 crc kubenswrapper[5001]: I0128 17:34:16.183711 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstack-cell1-galera-0" podStartSLOduration=7.833400904 podStartE2EDuration="30.183690392s" podCreationTimestamp="2026-01-28 17:33:46 +0000 UTC" firstStartedPulling="2026-01-28 17:33:48.436468095 +0000 UTC m=+1074.604256325" lastFinishedPulling="2026-01-28 17:34:10.786757583 +0000 UTC m=+1096.954545813" observedRunningTime="2026-01-28 17:34:16.177928775 +0000 UTC m=+1102.345717015" watchObservedRunningTime="2026-01-28 17:34:16.183690392 +0000 UTC m=+1102.351478632" Jan 28 17:34:16 crc kubenswrapper[5001]: I0128 17:34:16.207717 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstack-galera-0" podStartSLOduration=8.568443016 podStartE2EDuration="32.207692477s" podCreationTimestamp="2026-01-28 17:33:44 +0000 UTC" firstStartedPulling="2026-01-28 17:33:47.17919782 +0000 UTC m=+1073.346986050" lastFinishedPulling="2026-01-28 17:34:10.818447281 +0000 UTC m=+1096.986235511" observedRunningTime="2026-01-28 17:34:16.203331111 +0000 UTC m=+1102.371119351" watchObservedRunningTime="2026-01-28 17:34:16.207692477 +0000 UTC m=+1102.375480707" Jan 28 17:34:16 crc kubenswrapper[5001]: I0128 17:34:16.485178 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/memcached-0" Jan 28 17:34:16 crc kubenswrapper[5001]: I0128 17:34:16.619349 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:34:16 crc kubenswrapper[5001]: I0128 17:34:16.619393 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:34:17 crc kubenswrapper[5001]: I0128 17:34:17.841581 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:34:17 crc kubenswrapper[5001]: I0128 17:34:17.842691 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:34:20 crc kubenswrapper[5001]: I0128 17:34:20.278389 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:34:20 crc kubenswrapper[5001]: I0128 17:34:20.357842 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/openstack-cell1-galera-0" podUID="27cec822-2561-4682-bb1c-3fe4fd0805f4" containerName="galera" probeResult="failure" output=< Jan 28 17:34:20 crc kubenswrapper[5001]: wsrep_local_state_comment (Joined) differs from Synced Jan 28 17:34:20 crc kubenswrapper[5001]: > Jan 28 17:34:20 crc kubenswrapper[5001]: I0128 17:34:20.694855 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:34:20 crc kubenswrapper[5001]: I0128 17:34:20.756817 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-galera-0" Jan 28 17:34:25 crc kubenswrapper[5001]: I0128 17:34:25.004078 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-cp64w"] Jan 28 17:34:25 crc kubenswrapper[5001]: I0128 17:34:25.005815 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-cp64w" Jan 28 17:34:25 crc kubenswrapper[5001]: I0128 17:34:25.009092 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-mariadb-root-db-secret" Jan 28 17:34:25 crc kubenswrapper[5001]: I0128 17:34:25.016988 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-cp64w"] Jan 28 17:34:25 crc kubenswrapper[5001]: I0128 17:34:25.179259 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c47ae528-5781-4e04-8994-1bffba4078af-operator-scripts\") pod \"root-account-create-update-cp64w\" (UID: \"c47ae528-5781-4e04-8994-1bffba4078af\") " pod="nova-kuttl-default/root-account-create-update-cp64w" Jan 28 17:34:25 crc kubenswrapper[5001]: I0128 17:34:25.179601 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b2rk\" (UniqueName: \"kubernetes.io/projected/c47ae528-5781-4e04-8994-1bffba4078af-kube-api-access-5b2rk\") pod \"root-account-create-update-cp64w\" (UID: \"c47ae528-5781-4e04-8994-1bffba4078af\") " pod="nova-kuttl-default/root-account-create-update-cp64w" Jan 28 17:34:25 crc kubenswrapper[5001]: I0128 17:34:25.281513 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b2rk\" (UniqueName: \"kubernetes.io/projected/c47ae528-5781-4e04-8994-1bffba4078af-kube-api-access-5b2rk\") pod \"root-account-create-update-cp64w\" (UID: \"c47ae528-5781-4e04-8994-1bffba4078af\") " pod="nova-kuttl-default/root-account-create-update-cp64w" Jan 28 17:34:25 crc kubenswrapper[5001]: I0128 17:34:25.281701 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c47ae528-5781-4e04-8994-1bffba4078af-operator-scripts\") pod \"root-account-create-update-cp64w\" (UID: \"c47ae528-5781-4e04-8994-1bffba4078af\") " pod="nova-kuttl-default/root-account-create-update-cp64w" Jan 28 17:34:25 crc kubenswrapper[5001]: I0128 17:34:25.282599 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c47ae528-5781-4e04-8994-1bffba4078af-operator-scripts\") pod \"root-account-create-update-cp64w\" (UID: \"c47ae528-5781-4e04-8994-1bffba4078af\") " pod="nova-kuttl-default/root-account-create-update-cp64w" Jan 28 17:34:25 crc kubenswrapper[5001]: I0128 17:34:25.299620 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b2rk\" (UniqueName: \"kubernetes.io/projected/c47ae528-5781-4e04-8994-1bffba4078af-kube-api-access-5b2rk\") pod \"root-account-create-update-cp64w\" (UID: \"c47ae528-5781-4e04-8994-1bffba4078af\") " pod="nova-kuttl-default/root-account-create-update-cp64w" Jan 28 17:34:25 crc kubenswrapper[5001]: I0128 17:34:25.321254 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-cp64w" Jan 28 17:34:25 crc kubenswrapper[5001]: I0128 17:34:25.751793 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-cp64w"] Jan 28 17:34:25 crc kubenswrapper[5001]: W0128 17:34:25.756653 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc47ae528_5781_4e04_8994_1bffba4078af.slice/crio-6e1f7c9b3ff84662e2a186b2025b4c0ec80b0288ec7507261bdd9e61ee4822ae WatchSource:0}: Error finding container 6e1f7c9b3ff84662e2a186b2025b4c0ec80b0288ec7507261bdd9e61ee4822ae: Status 404 returned error can't find the container with id 6e1f7c9b3ff84662e2a186b2025b4c0ec80b0288ec7507261bdd9e61ee4822ae Jan 28 17:34:25 crc kubenswrapper[5001]: I0128 17:34:25.953492 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-create-jvk5f"] Jan 28 17:34:25 crc kubenswrapper[5001]: I0128 17:34:25.954945 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-jvk5f" Jan 28 17:34:25 crc kubenswrapper[5001]: I0128 17:34:25.962876 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-jvk5f"] Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.054830 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-d461-account-create-update-zsz4n"] Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.055696 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-d461-account-create-update-zsz4n" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.061367 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-db-secret" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.068951 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-d461-account-create-update-zsz4n"] Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.096044 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5be66815-6cf5-429b-8bd0-95eb7e898655-operator-scripts\") pod \"keystone-db-create-jvk5f\" (UID: \"5be66815-6cf5-429b-8bd0-95eb7e898655\") " pod="nova-kuttl-default/keystone-db-create-jvk5f" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.096141 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnq68\" (UniqueName: \"kubernetes.io/projected/5be66815-6cf5-429b-8bd0-95eb7e898655-kube-api-access-bnq68\") pod \"keystone-db-create-jvk5f\" (UID: \"5be66815-6cf5-429b-8bd0-95eb7e898655\") " pod="nova-kuttl-default/keystone-db-create-jvk5f" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.197308 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5be66815-6cf5-429b-8bd0-95eb7e898655-operator-scripts\") pod \"keystone-db-create-jvk5f\" (UID: \"5be66815-6cf5-429b-8bd0-95eb7e898655\") " pod="nova-kuttl-default/keystone-db-create-jvk5f" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.197368 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnq68\" (UniqueName: 
\"kubernetes.io/projected/5be66815-6cf5-429b-8bd0-95eb7e898655-kube-api-access-bnq68\") pod \"keystone-db-create-jvk5f\" (UID: \"5be66815-6cf5-429b-8bd0-95eb7e898655\") " pod="nova-kuttl-default/keystone-db-create-jvk5f" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.197423 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac3c059-fbe0-479b-8abc-cb0018604e0f-operator-scripts\") pod \"keystone-d461-account-create-update-zsz4n\" (UID: \"5ac3c059-fbe0-479b-8abc-cb0018604e0f\") " pod="nova-kuttl-default/keystone-d461-account-create-update-zsz4n" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.197450 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs2jl\" (UniqueName: \"kubernetes.io/projected/5ac3c059-fbe0-479b-8abc-cb0018604e0f-kube-api-access-vs2jl\") pod \"keystone-d461-account-create-update-zsz4n\" (UID: \"5ac3c059-fbe0-479b-8abc-cb0018604e0f\") " pod="nova-kuttl-default/keystone-d461-account-create-update-zsz4n" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.198148 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5be66815-6cf5-429b-8bd0-95eb7e898655-operator-scripts\") pod \"keystone-db-create-jvk5f\" (UID: \"5be66815-6cf5-429b-8bd0-95eb7e898655\") " pod="nova-kuttl-default/keystone-db-create-jvk5f" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.215226 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnq68\" (UniqueName: \"kubernetes.io/projected/5be66815-6cf5-429b-8bd0-95eb7e898655-kube-api-access-bnq68\") pod \"keystone-db-create-jvk5f\" (UID: \"5be66815-6cf5-429b-8bd0-95eb7e898655\") " pod="nova-kuttl-default/keystone-db-create-jvk5f" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.262124 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-cp64w" event={"ID":"c47ae528-5781-4e04-8994-1bffba4078af","Type":"ContainerStarted","Data":"4e1636acb75e638978dd96f1415a966fdebd6936fba28b60a3ccf524badcf620"} Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.262166 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-cp64w" event={"ID":"c47ae528-5781-4e04-8994-1bffba4078af","Type":"ContainerStarted","Data":"6e1f7c9b3ff84662e2a186b2025b4c0ec80b0288ec7507261bdd9e61ee4822ae"} Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.263884 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"1bc7adbb-0250-4320-98f4-7a0a69b77724","Type":"ContainerStarted","Data":"6878fa1be98126c24891e80cbe9d20218801e0e0473cab820895ab93335a4d98"} Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.265231 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"420bd810-85d0-4ced-bcd8-3ae62c8c79e4","Type":"ContainerStarted","Data":"6e1e124a0107c8162b9382665171f87a54ebbf99d6730d3017fab63816c1ca9c"} Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.287500 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/root-account-create-update-cp64w" podStartSLOduration=2.287478808 podStartE2EDuration="2.287478808s" podCreationTimestamp="2026-01-28 17:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:34:26.282477553 +0000 UTC m=+1112.450265783" watchObservedRunningTime="2026-01-28 17:34:26.287478808 +0000 UTC m=+1112.455267038" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.296224 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-create-6z2n5"] Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.297202 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-6z2n5" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.300620 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac3c059-fbe0-479b-8abc-cb0018604e0f-operator-scripts\") pod \"keystone-d461-account-create-update-zsz4n\" (UID: \"5ac3c059-fbe0-479b-8abc-cb0018604e0f\") " pod="nova-kuttl-default/keystone-d461-account-create-update-zsz4n" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.300671 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs2jl\" (UniqueName: \"kubernetes.io/projected/5ac3c059-fbe0-479b-8abc-cb0018604e0f-kube-api-access-vs2jl\") pod \"keystone-d461-account-create-update-zsz4n\" (UID: \"5ac3c059-fbe0-479b-8abc-cb0018604e0f\") " pod="nova-kuttl-default/keystone-d461-account-create-update-zsz4n" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.301371 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac3c059-fbe0-479b-8abc-cb0018604e0f-operator-scripts\") pod \"keystone-d461-account-create-update-zsz4n\" (UID: \"5ac3c059-fbe0-479b-8abc-cb0018604e0f\") " pod="nova-kuttl-default/keystone-d461-account-create-update-zsz4n" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.310774 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-create-6z2n5"] Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.319949 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-jvk5f" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.326523 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs2jl\" (UniqueName: \"kubernetes.io/projected/5ac3c059-fbe0-479b-8abc-cb0018604e0f-kube-api-access-vs2jl\") pod \"keystone-d461-account-create-update-zsz4n\" (UID: \"5ac3c059-fbe0-479b-8abc-cb0018604e0f\") " pod="nova-kuttl-default/keystone-d461-account-create-update-zsz4n" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.379333 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-d461-account-create-update-zsz4n" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.402862 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca854e6e-32b7-42d4-86b0-148253804265-operator-scripts\") pod \"placement-db-create-6z2n5\" (UID: \"ca854e6e-32b7-42d4-86b0-148253804265\") " pod="nova-kuttl-default/placement-db-create-6z2n5" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.402929 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89xl8\" (UniqueName: \"kubernetes.io/projected/ca854e6e-32b7-42d4-86b0-148253804265-kube-api-access-89xl8\") pod \"placement-db-create-6z2n5\" (UID: \"ca854e6e-32b7-42d4-86b0-148253804265\") " pod="nova-kuttl-default/placement-db-create-6z2n5" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.503194 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-6341-account-create-update-zhnmv"] Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.504394 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-6341-account-create-update-zhnmv" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.504442 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca854e6e-32b7-42d4-86b0-148253804265-operator-scripts\") pod \"placement-db-create-6z2n5\" (UID: \"ca854e6e-32b7-42d4-86b0-148253804265\") " pod="nova-kuttl-default/placement-db-create-6z2n5" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.504481 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89xl8\" (UniqueName: \"kubernetes.io/projected/ca854e6e-32b7-42d4-86b0-148253804265-kube-api-access-89xl8\") pod \"placement-db-create-6z2n5\" (UID: \"ca854e6e-32b7-42d4-86b0-148253804265\") " pod="nova-kuttl-default/placement-db-create-6z2n5" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.505753 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca854e6e-32b7-42d4-86b0-148253804265-operator-scripts\") pod \"placement-db-create-6z2n5\" (UID: \"ca854e6e-32b7-42d4-86b0-148253804265\") " pod="nova-kuttl-default/placement-db-create-6z2n5" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.508767 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-db-secret" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.532171 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-6341-account-create-update-zhnmv"] Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.533389 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89xl8\" (UniqueName: \"kubernetes.io/projected/ca854e6e-32b7-42d4-86b0-148253804265-kube-api-access-89xl8\") pod \"placement-db-create-6z2n5\" (UID: \"ca854e6e-32b7-42d4-86b0-148253804265\") " pod="nova-kuttl-default/placement-db-create-6z2n5" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.606205 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr2h7\" (UniqueName: 
\"kubernetes.io/projected/fb719788-2aef-4b17-9706-0bd463e0ebe7-kube-api-access-tr2h7\") pod \"placement-6341-account-create-update-zhnmv\" (UID: \"fb719788-2aef-4b17-9706-0bd463e0ebe7\") " pod="nova-kuttl-default/placement-6341-account-create-update-zhnmv" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.606328 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb719788-2aef-4b17-9706-0bd463e0ebe7-operator-scripts\") pod \"placement-6341-account-create-update-zhnmv\" (UID: \"fb719788-2aef-4b17-9706-0bd463e0ebe7\") " pod="nova-kuttl-default/placement-6341-account-create-update-zhnmv" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.610634 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-6z2n5" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.709131 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb719788-2aef-4b17-9706-0bd463e0ebe7-operator-scripts\") pod \"placement-6341-account-create-update-zhnmv\" (UID: \"fb719788-2aef-4b17-9706-0bd463e0ebe7\") " pod="nova-kuttl-default/placement-6341-account-create-update-zhnmv" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.709253 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr2h7\" (UniqueName: \"kubernetes.io/projected/fb719788-2aef-4b17-9706-0bd463e0ebe7-kube-api-access-tr2h7\") pod \"placement-6341-account-create-update-zhnmv\" (UID: \"fb719788-2aef-4b17-9706-0bd463e0ebe7\") " pod="nova-kuttl-default/placement-6341-account-create-update-zhnmv" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.710444 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb719788-2aef-4b17-9706-0bd463e0ebe7-operator-scripts\") pod \"placement-6341-account-create-update-zhnmv\" (UID: \"fb719788-2aef-4b17-9706-0bd463e0ebe7\") " pod="nova-kuttl-default/placement-6341-account-create-update-zhnmv" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.729139 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr2h7\" (UniqueName: \"kubernetes.io/projected/fb719788-2aef-4b17-9706-0bd463e0ebe7-kube-api-access-tr2h7\") pod \"placement-6341-account-create-update-zhnmv\" (UID: \"fb719788-2aef-4b17-9706-0bd463e0ebe7\") " pod="nova-kuttl-default/placement-6341-account-create-update-zhnmv" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.876606 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-jvk5f"] Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.903816 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-6341-account-create-update-zhnmv" Jan 28 17:34:26 crc kubenswrapper[5001]: I0128 17:34:26.943805 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-d461-account-create-update-zsz4n"] Jan 28 17:34:27 crc kubenswrapper[5001]: I0128 17:34:27.090231 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-create-6z2n5"] Jan 28 17:34:27 crc kubenswrapper[5001]: W0128 17:34:27.098371 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca854e6e_32b7_42d4_86b0_148253804265.slice/crio-0f4abcab46e1acb1541eab61670e004423e8fd7e85e4971185c15ec041194b8c WatchSource:0}: Error finding container 0f4abcab46e1acb1541eab61670e004423e8fd7e85e4971185c15ec041194b8c: Status 404 returned error can't find the container with id 0f4abcab46e1acb1541eab61670e004423e8fd7e85e4971185c15ec041194b8c Jan 28 17:34:27 crc kubenswrapper[5001]: I0128 17:34:27.162803 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-6341-account-create-update-zhnmv"] Jan 28 17:34:27 crc kubenswrapper[5001]: W0128 17:34:27.177113 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb719788_2aef_4b17_9706_0bd463e0ebe7.slice/crio-76cd2829ff0dbddca5698d3344026914b351064bd979ca6624547fe9466d93e1 WatchSource:0}: Error finding container 76cd2829ff0dbddca5698d3344026914b351064bd979ca6624547fe9466d93e1: Status 404 returned error can't find the container with id 76cd2829ff0dbddca5698d3344026914b351064bd979ca6624547fe9466d93e1 Jan 28 17:34:27 crc kubenswrapper[5001]: I0128 17:34:27.288356 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-d461-account-create-update-zsz4n" event={"ID":"5ac3c059-fbe0-479b-8abc-cb0018604e0f","Type":"ContainerStarted","Data":"940800e2a498586430520210f77868ab8d4ed81366f12e54a4c6198ce47167a1"} Jan 28 17:34:27 crc kubenswrapper[5001]: I0128 17:34:27.291489 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-jvk5f" event={"ID":"5be66815-6cf5-429b-8bd0-95eb7e898655","Type":"ContainerStarted","Data":"735c5592929a7c0fd8d64b746274f8d9d69cd8f431a6ff820aa773fdca855bc0"} Jan 28 17:34:27 crc kubenswrapper[5001]: I0128 17:34:27.291540 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-jvk5f" event={"ID":"5be66815-6cf5-429b-8bd0-95eb7e898655","Type":"ContainerStarted","Data":"1a120e0251fed0b5ae79863d7f4c462dfdb240a94d152a8b3ff3d23ef052a334"} Jan 28 17:34:27 crc kubenswrapper[5001]: I0128 17:34:27.293951 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-6z2n5" event={"ID":"ca854e6e-32b7-42d4-86b0-148253804265","Type":"ContainerStarted","Data":"0f4abcab46e1acb1541eab61670e004423e8fd7e85e4971185c15ec041194b8c"} Jan 28 17:34:27 crc kubenswrapper[5001]: I0128 17:34:27.295435 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"bdcddc03-22a9-44af-8c79-67fe309358ec","Type":"ContainerStarted","Data":"81a29eb8d4b42b5747be8da178a68524a9543abe9ddf86ead9ef0b4c3b7d71d1"} Jan 28 17:34:27 crc kubenswrapper[5001]: I0128 17:34:27.296873 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-6341-account-create-update-zhnmv" 
event={"ID":"fb719788-2aef-4b17-9706-0bd463e0ebe7","Type":"ContainerStarted","Data":"76cd2829ff0dbddca5698d3344026914b351064bd979ca6624547fe9466d93e1"} Jan 28 17:34:27 crc kubenswrapper[5001]: I0128 17:34:27.926215 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 17:34:28 crc kubenswrapper[5001]: I0128 17:34:28.304438 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-6z2n5" event={"ID":"ca854e6e-32b7-42d4-86b0-148253804265","Type":"ContainerStarted","Data":"0ecdb842c25423d1391be705b257066d0cfb83c0d3e3d8a9856b607174aed0e7"} Jan 28 17:34:28 crc kubenswrapper[5001]: I0128 17:34:28.305703 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-6341-account-create-update-zhnmv" event={"ID":"fb719788-2aef-4b17-9706-0bd463e0ebe7","Type":"ContainerStarted","Data":"43a6c0803bbe577b3aff6c88dcced1a196833ca0aabba98bdd67435c7fc96bf3"} Jan 28 17:34:28 crc kubenswrapper[5001]: I0128 17:34:28.306962 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-d461-account-create-update-zsz4n" event={"ID":"5ac3c059-fbe0-479b-8abc-cb0018604e0f","Type":"ContainerStarted","Data":"2129a9942767ce256fd3be53a912afbb591634eaf8807091119ca4627cc2a8c0"} Jan 28 17:34:28 crc kubenswrapper[5001]: I0128 17:34:28.321324 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-db-create-6z2n5" podStartSLOduration=2.321303741 podStartE2EDuration="2.321303741s" podCreationTimestamp="2026-01-28 17:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:34:28.315612066 +0000 UTC m=+1114.483400296" watchObservedRunningTime="2026-01-28 17:34:28.321303741 +0000 UTC m=+1114.489091971" Jan 28 17:34:28 crc kubenswrapper[5001]: I0128 17:34:28.328583 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-d461-account-create-update-zsz4n" podStartSLOduration=2.3285650909999998 podStartE2EDuration="2.328565091s" podCreationTimestamp="2026-01-28 17:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:34:28.327261403 +0000 UTC m=+1114.495049633" watchObservedRunningTime="2026-01-28 17:34:28.328565091 +0000 UTC m=+1114.496353321" Jan 28 17:34:28 crc kubenswrapper[5001]: I0128 17:34:28.342810 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-db-create-jvk5f" podStartSLOduration=3.342793453 podStartE2EDuration="3.342793453s" podCreationTimestamp="2026-01-28 17:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:34:28.340179687 +0000 UTC m=+1114.507967917" watchObservedRunningTime="2026-01-28 17:34:28.342793453 +0000 UTC m=+1114.510581683" Jan 28 17:34:28 crc kubenswrapper[5001]: I0128 17:34:28.358582 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-6341-account-create-update-zhnmv" podStartSLOduration=2.3585645299999998 podStartE2EDuration="2.35856453s" podCreationTimestamp="2026-01-28 17:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 
17:34:28.354717149 +0000 UTC m=+1114.522505399" watchObservedRunningTime="2026-01-28 17:34:28.35856453 +0000 UTC m=+1114.526352760" Jan 28 17:34:29 crc kubenswrapper[5001]: I0128 17:34:29.313192 5001 generic.go:334] "Generic (PLEG): container finished" podID="5be66815-6cf5-429b-8bd0-95eb7e898655" containerID="735c5592929a7c0fd8d64b746274f8d9d69cd8f431a6ff820aa773fdca855bc0" exitCode=0 Jan 28 17:34:29 crc kubenswrapper[5001]: I0128 17:34:29.313261 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-jvk5f" event={"ID":"5be66815-6cf5-429b-8bd0-95eb7e898655","Type":"ContainerDied","Data":"735c5592929a7c0fd8d64b746274f8d9d69cd8f431a6ff820aa773fdca855bc0"} Jan 28 17:34:30 crc kubenswrapper[5001]: I0128 17:34:30.619635 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-jvk5f" Jan 28 17:34:30 crc kubenswrapper[5001]: I0128 17:34:30.772857 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5be66815-6cf5-429b-8bd0-95eb7e898655-operator-scripts\") pod \"5be66815-6cf5-429b-8bd0-95eb7e898655\" (UID: \"5be66815-6cf5-429b-8bd0-95eb7e898655\") " Jan 28 17:34:30 crc kubenswrapper[5001]: I0128 17:34:30.773051 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnq68\" (UniqueName: \"kubernetes.io/projected/5be66815-6cf5-429b-8bd0-95eb7e898655-kube-api-access-bnq68\") pod \"5be66815-6cf5-429b-8bd0-95eb7e898655\" (UID: \"5be66815-6cf5-429b-8bd0-95eb7e898655\") " Jan 28 17:34:30 crc kubenswrapper[5001]: I0128 17:34:30.774121 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5be66815-6cf5-429b-8bd0-95eb7e898655-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5be66815-6cf5-429b-8bd0-95eb7e898655" (UID: "5be66815-6cf5-429b-8bd0-95eb7e898655"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:34:30 crc kubenswrapper[5001]: I0128 17:34:30.780211 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5be66815-6cf5-429b-8bd0-95eb7e898655-kube-api-access-bnq68" (OuterVolumeSpecName: "kube-api-access-bnq68") pod "5be66815-6cf5-429b-8bd0-95eb7e898655" (UID: "5be66815-6cf5-429b-8bd0-95eb7e898655"). InnerVolumeSpecName "kube-api-access-bnq68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:34:30 crc kubenswrapper[5001]: I0128 17:34:30.875222 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5be66815-6cf5-429b-8bd0-95eb7e898655-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:30 crc kubenswrapper[5001]: I0128 17:34:30.875262 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnq68\" (UniqueName: \"kubernetes.io/projected/5be66815-6cf5-429b-8bd0-95eb7e898655-kube-api-access-bnq68\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:31 crc kubenswrapper[5001]: I0128 17:34:31.330932 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-jvk5f" event={"ID":"5be66815-6cf5-429b-8bd0-95eb7e898655","Type":"ContainerDied","Data":"1a120e0251fed0b5ae79863d7f4c462dfdb240a94d152a8b3ff3d23ef052a334"} Jan 28 17:34:31 crc kubenswrapper[5001]: I0128 17:34:31.331019 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a120e0251fed0b5ae79863d7f4c462dfdb240a94d152a8b3ff3d23ef052a334" Jan 28 17:34:31 crc kubenswrapper[5001]: I0128 17:34:31.331087 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-jvk5f" Jan 28 17:34:31 crc kubenswrapper[5001]: I0128 17:34:31.349087 5001 generic.go:334] "Generic (PLEG): container finished" podID="c47ae528-5781-4e04-8994-1bffba4078af" containerID="4e1636acb75e638978dd96f1415a966fdebd6936fba28b60a3ccf524badcf620" exitCode=0 Jan 28 17:34:31 crc kubenswrapper[5001]: I0128 17:34:31.349156 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-cp64w" event={"ID":"c47ae528-5781-4e04-8994-1bffba4078af","Type":"ContainerDied","Data":"4e1636acb75e638978dd96f1415a966fdebd6936fba28b60a3ccf524badcf620"} Jan 28 17:34:32 crc kubenswrapper[5001]: I0128 17:34:32.357195 5001 generic.go:334] "Generic (PLEG): container finished" podID="ca854e6e-32b7-42d4-86b0-148253804265" containerID="0ecdb842c25423d1391be705b257066d0cfb83c0d3e3d8a9856b607174aed0e7" exitCode=0 Jan 28 17:34:32 crc kubenswrapper[5001]: I0128 17:34:32.358600 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-6z2n5" event={"ID":"ca854e6e-32b7-42d4-86b0-148253804265","Type":"ContainerDied","Data":"0ecdb842c25423d1391be705b257066d0cfb83c0d3e3d8a9856b607174aed0e7"} Jan 28 17:34:32 crc kubenswrapper[5001]: I0128 17:34:32.640637 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-cp64w" Jan 28 17:34:32 crc kubenswrapper[5001]: I0128 17:34:32.802903 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c47ae528-5781-4e04-8994-1bffba4078af-operator-scripts\") pod \"c47ae528-5781-4e04-8994-1bffba4078af\" (UID: \"c47ae528-5781-4e04-8994-1bffba4078af\") " Jan 28 17:34:32 crc kubenswrapper[5001]: I0128 17:34:32.802951 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b2rk\" (UniqueName: \"kubernetes.io/projected/c47ae528-5781-4e04-8994-1bffba4078af-kube-api-access-5b2rk\") pod \"c47ae528-5781-4e04-8994-1bffba4078af\" (UID: \"c47ae528-5781-4e04-8994-1bffba4078af\") " Jan 28 17:34:32 crc kubenswrapper[5001]: I0128 17:34:32.803735 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c47ae528-5781-4e04-8994-1bffba4078af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c47ae528-5781-4e04-8994-1bffba4078af" (UID: "c47ae528-5781-4e04-8994-1bffba4078af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:34:32 crc kubenswrapper[5001]: I0128 17:34:32.807808 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c47ae528-5781-4e04-8994-1bffba4078af-kube-api-access-5b2rk" (OuterVolumeSpecName: "kube-api-access-5b2rk") pod "c47ae528-5781-4e04-8994-1bffba4078af" (UID: "c47ae528-5781-4e04-8994-1bffba4078af"). InnerVolumeSpecName "kube-api-access-5b2rk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:34:32 crc kubenswrapper[5001]: I0128 17:34:32.905006 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c47ae528-5781-4e04-8994-1bffba4078af-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:32 crc kubenswrapper[5001]: I0128 17:34:32.905041 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b2rk\" (UniqueName: \"kubernetes.io/projected/c47ae528-5781-4e04-8994-1bffba4078af-kube-api-access-5b2rk\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:33 crc kubenswrapper[5001]: I0128 17:34:33.366021 5001 generic.go:334] "Generic (PLEG): container finished" podID="fb719788-2aef-4b17-9706-0bd463e0ebe7" containerID="43a6c0803bbe577b3aff6c88dcced1a196833ca0aabba98bdd67435c7fc96bf3" exitCode=0 Jan 28 17:34:33 crc kubenswrapper[5001]: I0128 17:34:33.366098 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-6341-account-create-update-zhnmv" event={"ID":"fb719788-2aef-4b17-9706-0bd463e0ebe7","Type":"ContainerDied","Data":"43a6c0803bbe577b3aff6c88dcced1a196833ca0aabba98bdd67435c7fc96bf3"} Jan 28 17:34:33 crc kubenswrapper[5001]: I0128 17:34:33.367838 5001 generic.go:334] "Generic (PLEG): container finished" podID="5ac3c059-fbe0-479b-8abc-cb0018604e0f" containerID="2129a9942767ce256fd3be53a912afbb591634eaf8807091119ca4627cc2a8c0" exitCode=0 Jan 28 17:34:33 crc kubenswrapper[5001]: I0128 17:34:33.367926 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-d461-account-create-update-zsz4n" event={"ID":"5ac3c059-fbe0-479b-8abc-cb0018604e0f","Type":"ContainerDied","Data":"2129a9942767ce256fd3be53a912afbb591634eaf8807091119ca4627cc2a8c0"} Jan 28 17:34:33 crc kubenswrapper[5001]: I0128 17:34:33.369497 5001 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-cp64w" Jan 28 17:34:33 crc kubenswrapper[5001]: I0128 17:34:33.369624 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-cp64w" event={"ID":"c47ae528-5781-4e04-8994-1bffba4078af","Type":"ContainerDied","Data":"6e1f7c9b3ff84662e2a186b2025b4c0ec80b0288ec7507261bdd9e61ee4822ae"} Jan 28 17:34:33 crc kubenswrapper[5001]: I0128 17:34:33.369678 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e1f7c9b3ff84662e2a186b2025b4c0ec80b0288ec7507261bdd9e61ee4822ae" Jan 28 17:34:33 crc kubenswrapper[5001]: I0128 17:34:33.617796 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-6z2n5" Jan 28 17:34:33 crc kubenswrapper[5001]: I0128 17:34:33.718280 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca854e6e-32b7-42d4-86b0-148253804265-operator-scripts\") pod \"ca854e6e-32b7-42d4-86b0-148253804265\" (UID: \"ca854e6e-32b7-42d4-86b0-148253804265\") " Jan 28 17:34:33 crc kubenswrapper[5001]: I0128 17:34:33.718565 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89xl8\" (UniqueName: \"kubernetes.io/projected/ca854e6e-32b7-42d4-86b0-148253804265-kube-api-access-89xl8\") pod \"ca854e6e-32b7-42d4-86b0-148253804265\" (UID: \"ca854e6e-32b7-42d4-86b0-148253804265\") " Jan 28 17:34:33 crc kubenswrapper[5001]: I0128 17:34:33.718707 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca854e6e-32b7-42d4-86b0-148253804265-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca854e6e-32b7-42d4-86b0-148253804265" (UID: "ca854e6e-32b7-42d4-86b0-148253804265"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:34:33 crc kubenswrapper[5001]: I0128 17:34:33.718918 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca854e6e-32b7-42d4-86b0-148253804265-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:33 crc kubenswrapper[5001]: I0128 17:34:33.725652 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca854e6e-32b7-42d4-86b0-148253804265-kube-api-access-89xl8" (OuterVolumeSpecName: "kube-api-access-89xl8") pod "ca854e6e-32b7-42d4-86b0-148253804265" (UID: "ca854e6e-32b7-42d4-86b0-148253804265"). InnerVolumeSpecName "kube-api-access-89xl8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:34:33 crc kubenswrapper[5001]: I0128 17:34:33.821633 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89xl8\" (UniqueName: \"kubernetes.io/projected/ca854e6e-32b7-42d4-86b0-148253804265-kube-api-access-89xl8\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:34 crc kubenswrapper[5001]: I0128 17:34:34.379479 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-6z2n5" event={"ID":"ca854e6e-32b7-42d4-86b0-148253804265","Type":"ContainerDied","Data":"0f4abcab46e1acb1541eab61670e004423e8fd7e85e4971185c15ec041194b8c"} Jan 28 17:34:34 crc kubenswrapper[5001]: I0128 17:34:34.379530 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f4abcab46e1acb1541eab61670e004423e8fd7e85e4971185c15ec041194b8c" Jan 28 17:34:34 crc kubenswrapper[5001]: I0128 17:34:34.379786 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-6z2n5" Jan 28 17:34:34 crc kubenswrapper[5001]: I0128 17:34:34.768424 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-6341-account-create-update-zhnmv" Jan 28 17:34:34 crc kubenswrapper[5001]: I0128 17:34:34.773460 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-d461-account-create-update-zsz4n" Jan 28 17:34:34 crc kubenswrapper[5001]: I0128 17:34:34.939310 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac3c059-fbe0-479b-8abc-cb0018604e0f-operator-scripts\") pod \"5ac3c059-fbe0-479b-8abc-cb0018604e0f\" (UID: \"5ac3c059-fbe0-479b-8abc-cb0018604e0f\") " Jan 28 17:34:34 crc kubenswrapper[5001]: I0128 17:34:34.939752 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr2h7\" (UniqueName: \"kubernetes.io/projected/fb719788-2aef-4b17-9706-0bd463e0ebe7-kube-api-access-tr2h7\") pod \"fb719788-2aef-4b17-9706-0bd463e0ebe7\" (UID: \"fb719788-2aef-4b17-9706-0bd463e0ebe7\") " Jan 28 17:34:34 crc kubenswrapper[5001]: I0128 17:34:34.939967 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ac3c059-fbe0-479b-8abc-cb0018604e0f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ac3c059-fbe0-479b-8abc-cb0018604e0f" (UID: "5ac3c059-fbe0-479b-8abc-cb0018604e0f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:34:34 crc kubenswrapper[5001]: I0128 17:34:34.940519 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs2jl\" (UniqueName: \"kubernetes.io/projected/5ac3c059-fbe0-479b-8abc-cb0018604e0f-kube-api-access-vs2jl\") pod \"5ac3c059-fbe0-479b-8abc-cb0018604e0f\" (UID: \"5ac3c059-fbe0-479b-8abc-cb0018604e0f\") " Jan 28 17:34:34 crc kubenswrapper[5001]: I0128 17:34:34.940557 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb719788-2aef-4b17-9706-0bd463e0ebe7-operator-scripts\") pod \"fb719788-2aef-4b17-9706-0bd463e0ebe7\" (UID: \"fb719788-2aef-4b17-9706-0bd463e0ebe7\") " Jan 28 17:34:34 crc kubenswrapper[5001]: I0128 17:34:34.941037 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac3c059-fbe0-479b-8abc-cb0018604e0f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:34 crc kubenswrapper[5001]: I0128 17:34:34.941079 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb719788-2aef-4b17-9706-0bd463e0ebe7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb719788-2aef-4b17-9706-0bd463e0ebe7" (UID: "fb719788-2aef-4b17-9706-0bd463e0ebe7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:34:34 crc kubenswrapper[5001]: I0128 17:34:34.943155 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb719788-2aef-4b17-9706-0bd463e0ebe7-kube-api-access-tr2h7" (OuterVolumeSpecName: "kube-api-access-tr2h7") pod "fb719788-2aef-4b17-9706-0bd463e0ebe7" (UID: "fb719788-2aef-4b17-9706-0bd463e0ebe7"). InnerVolumeSpecName "kube-api-access-tr2h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:34:34 crc kubenswrapper[5001]: I0128 17:34:34.943634 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ac3c059-fbe0-479b-8abc-cb0018604e0f-kube-api-access-vs2jl" (OuterVolumeSpecName: "kube-api-access-vs2jl") pod "5ac3c059-fbe0-479b-8abc-cb0018604e0f" (UID: "5ac3c059-fbe0-479b-8abc-cb0018604e0f"). InnerVolumeSpecName "kube-api-access-vs2jl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:34:35 crc kubenswrapper[5001]: I0128 17:34:35.042264 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs2jl\" (UniqueName: \"kubernetes.io/projected/5ac3c059-fbe0-479b-8abc-cb0018604e0f-kube-api-access-vs2jl\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:35 crc kubenswrapper[5001]: I0128 17:34:35.042299 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb719788-2aef-4b17-9706-0bd463e0ebe7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:35 crc kubenswrapper[5001]: I0128 17:34:35.042309 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr2h7\" (UniqueName: \"kubernetes.io/projected/fb719788-2aef-4b17-9706-0bd463e0ebe7-kube-api-access-tr2h7\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:35 crc kubenswrapper[5001]: I0128 17:34:35.388805 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-6341-account-create-update-zhnmv" event={"ID":"fb719788-2aef-4b17-9706-0bd463e0ebe7","Type":"ContainerDied","Data":"76cd2829ff0dbddca5698d3344026914b351064bd979ca6624547fe9466d93e1"} Jan 28 17:34:35 crc kubenswrapper[5001]: I0128 17:34:35.388853 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76cd2829ff0dbddca5698d3344026914b351064bd979ca6624547fe9466d93e1" Jan 28 17:34:35 crc kubenswrapper[5001]: I0128 17:34:35.388865 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-6341-account-create-update-zhnmv" Jan 28 17:34:35 crc kubenswrapper[5001]: I0128 17:34:35.390830 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-d461-account-create-update-zsz4n" event={"ID":"5ac3c059-fbe0-479b-8abc-cb0018604e0f","Type":"ContainerDied","Data":"940800e2a498586430520210f77868ab8d4ed81366f12e54a4c6198ce47167a1"} Jan 28 17:34:35 crc kubenswrapper[5001]: I0128 17:34:35.390864 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="940800e2a498586430520210f77868ab8d4ed81366f12e54a4c6198ce47167a1" Jan 28 17:34:35 crc kubenswrapper[5001]: I0128 17:34:35.390907 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-d461-account-create-update-zsz4n" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.208897 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-cp64w"] Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.214563 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-cp64w"] Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.276456 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-rwnnn"] Jan 28 17:34:36 crc kubenswrapper[5001]: E0128 17:34:36.276752 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ac3c059-fbe0-479b-8abc-cb0018604e0f" containerName="mariadb-account-create-update" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.276774 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ac3c059-fbe0-479b-8abc-cb0018604e0f" containerName="mariadb-account-create-update" Jan 28 17:34:36 crc kubenswrapper[5001]: E0128 17:34:36.276791 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca854e6e-32b7-42d4-86b0-148253804265" containerName="mariadb-database-create" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.276797 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca854e6e-32b7-42d4-86b0-148253804265" containerName="mariadb-database-create" Jan 28 17:34:36 crc kubenswrapper[5001]: E0128 17:34:36.276813 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb719788-2aef-4b17-9706-0bd463e0ebe7" containerName="mariadb-account-create-update" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.276819 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb719788-2aef-4b17-9706-0bd463e0ebe7" containerName="mariadb-account-create-update" Jan 28 17:34:36 crc kubenswrapper[5001]: E0128 17:34:36.276830 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47ae528-5781-4e04-8994-1bffba4078af" containerName="mariadb-account-create-update" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.276836 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47ae528-5781-4e04-8994-1bffba4078af" containerName="mariadb-account-create-update" Jan 28 17:34:36 crc kubenswrapper[5001]: E0128 17:34:36.276848 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be66815-6cf5-429b-8bd0-95eb7e898655" containerName="mariadb-database-create" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.276854 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be66815-6cf5-429b-8bd0-95eb7e898655" containerName="mariadb-database-create" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.276997 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ac3c059-fbe0-479b-8abc-cb0018604e0f" containerName="mariadb-account-create-update" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.277013 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be66815-6cf5-429b-8bd0-95eb7e898655" containerName="mariadb-database-create" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.277022 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="c47ae528-5781-4e04-8994-1bffba4078af" containerName="mariadb-account-create-update" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.277029 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca854e6e-32b7-42d4-86b0-148253804265" 
containerName="mariadb-database-create" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.277041 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb719788-2aef-4b17-9706-0bd463e0ebe7" containerName="mariadb-account-create-update" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.277574 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-rwnnn" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.279724 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-cell1-mariadb-root-db-secret" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.287310 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-rwnnn"] Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.362197 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwm45\" (UniqueName: \"kubernetes.io/projected/0548880a-31f0-4d7d-9eb8-5b402a8cc67a-kube-api-access-gwm45\") pod \"root-account-create-update-rwnnn\" (UID: \"0548880a-31f0-4d7d-9eb8-5b402a8cc67a\") " pod="nova-kuttl-default/root-account-create-update-rwnnn" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.362485 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0548880a-31f0-4d7d-9eb8-5b402a8cc67a-operator-scripts\") pod \"root-account-create-update-rwnnn\" (UID: \"0548880a-31f0-4d7d-9eb8-5b402a8cc67a\") " pod="nova-kuttl-default/root-account-create-update-rwnnn" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.464347 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwm45\" (UniqueName: \"kubernetes.io/projected/0548880a-31f0-4d7d-9eb8-5b402a8cc67a-kube-api-access-gwm45\") pod \"root-account-create-update-rwnnn\" (UID: \"0548880a-31f0-4d7d-9eb8-5b402a8cc67a\") " pod="nova-kuttl-default/root-account-create-update-rwnnn" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.464847 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0548880a-31f0-4d7d-9eb8-5b402a8cc67a-operator-scripts\") pod \"root-account-create-update-rwnnn\" (UID: \"0548880a-31f0-4d7d-9eb8-5b402a8cc67a\") " pod="nova-kuttl-default/root-account-create-update-rwnnn" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.465722 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0548880a-31f0-4d7d-9eb8-5b402a8cc67a-operator-scripts\") pod \"root-account-create-update-rwnnn\" (UID: \"0548880a-31f0-4d7d-9eb8-5b402a8cc67a\") " pod="nova-kuttl-default/root-account-create-update-rwnnn" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.482191 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwm45\" (UniqueName: \"kubernetes.io/projected/0548880a-31f0-4d7d-9eb8-5b402a8cc67a-kube-api-access-gwm45\") pod \"root-account-create-update-rwnnn\" (UID: \"0548880a-31f0-4d7d-9eb8-5b402a8cc67a\") " pod="nova-kuttl-default/root-account-create-update-rwnnn" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.594369 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-rwnnn" Jan 28 17:34:36 crc kubenswrapper[5001]: I0128 17:34:36.603378 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c47ae528-5781-4e04-8994-1bffba4078af" path="/var/lib/kubelet/pods/c47ae528-5781-4e04-8994-1bffba4078af/volumes" Jan 28 17:34:37 crc kubenswrapper[5001]: I0128 17:34:37.089757 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-rwnnn"] Jan 28 17:34:37 crc kubenswrapper[5001]: W0128 17:34:37.095139 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0548880a_31f0_4d7d_9eb8_5b402a8cc67a.slice/crio-2f7d0e9587fcce4ae740b72b1b2c351a73b39b660506424d5927bb72c3fd34df WatchSource:0}: Error finding container 2f7d0e9587fcce4ae740b72b1b2c351a73b39b660506424d5927bb72c3fd34df: Status 404 returned error can't find the container with id 2f7d0e9587fcce4ae740b72b1b2c351a73b39b660506424d5927bb72c3fd34df Jan 28 17:34:37 crc kubenswrapper[5001]: I0128 17:34:37.978295 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-rwnnn" event={"ID":"0548880a-31f0-4d7d-9eb8-5b402a8cc67a","Type":"ContainerStarted","Data":"2f7d0e9587fcce4ae740b72b1b2c351a73b39b660506424d5927bb72c3fd34df"} Jan 28 17:34:38 crc kubenswrapper[5001]: I0128 17:34:38.986370 5001 generic.go:334] "Generic (PLEG): container finished" podID="0548880a-31f0-4d7d-9eb8-5b402a8cc67a" containerID="2363bafdad4f4444eba4fa9e8b59218aae530e7daf3740d20c0fdca2c4417bf3" exitCode=0 Jan 28 17:34:38 crc kubenswrapper[5001]: I0128 17:34:38.986449 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-rwnnn" event={"ID":"0548880a-31f0-4d7d-9eb8-5b402a8cc67a","Type":"ContainerDied","Data":"2363bafdad4f4444eba4fa9e8b59218aae530e7daf3740d20c0fdca2c4417bf3"} Jan 28 17:34:40 crc kubenswrapper[5001]: I0128 17:34:40.247661 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-rwnnn" Jan 28 17:34:40 crc kubenswrapper[5001]: I0128 17:34:40.266711 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwm45\" (UniqueName: \"kubernetes.io/projected/0548880a-31f0-4d7d-9eb8-5b402a8cc67a-kube-api-access-gwm45\") pod \"0548880a-31f0-4d7d-9eb8-5b402a8cc67a\" (UID: \"0548880a-31f0-4d7d-9eb8-5b402a8cc67a\") " Jan 28 17:34:40 crc kubenswrapper[5001]: I0128 17:34:40.266789 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0548880a-31f0-4d7d-9eb8-5b402a8cc67a-operator-scripts\") pod \"0548880a-31f0-4d7d-9eb8-5b402a8cc67a\" (UID: \"0548880a-31f0-4d7d-9eb8-5b402a8cc67a\") " Jan 28 17:34:40 crc kubenswrapper[5001]: I0128 17:34:40.267860 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0548880a-31f0-4d7d-9eb8-5b402a8cc67a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0548880a-31f0-4d7d-9eb8-5b402a8cc67a" (UID: "0548880a-31f0-4d7d-9eb8-5b402a8cc67a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:34:40 crc kubenswrapper[5001]: I0128 17:34:40.279482 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0548880a-31f0-4d7d-9eb8-5b402a8cc67a-kube-api-access-gwm45" (OuterVolumeSpecName: "kube-api-access-gwm45") pod "0548880a-31f0-4d7d-9eb8-5b402a8cc67a" (UID: "0548880a-31f0-4d7d-9eb8-5b402a8cc67a"). InnerVolumeSpecName "kube-api-access-gwm45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:34:40 crc kubenswrapper[5001]: I0128 17:34:40.368426 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwm45\" (UniqueName: \"kubernetes.io/projected/0548880a-31f0-4d7d-9eb8-5b402a8cc67a-kube-api-access-gwm45\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:40 crc kubenswrapper[5001]: I0128 17:34:40.368762 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0548880a-31f0-4d7d-9eb8-5b402a8cc67a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:41 crc kubenswrapper[5001]: I0128 17:34:41.004477 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-rwnnn" event={"ID":"0548880a-31f0-4d7d-9eb8-5b402a8cc67a","Type":"ContainerDied","Data":"2f7d0e9587fcce4ae740b72b1b2c351a73b39b660506424d5927bb72c3fd34df"} Jan 28 17:34:41 crc kubenswrapper[5001]: I0128 17:34:41.005103 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f7d0e9587fcce4ae740b72b1b2c351a73b39b660506424d5927bb72c3fd34df" Jan 28 17:34:41 crc kubenswrapper[5001]: I0128 17:34:41.004527 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-rwnnn" Jan 28 17:34:58 crc kubenswrapper[5001]: I0128 17:34:58.132944 5001 generic.go:334] "Generic (PLEG): container finished" podID="1bc7adbb-0250-4320-98f4-7a0a69b77724" containerID="6878fa1be98126c24891e80cbe9d20218801e0e0473cab820895ab93335a4d98" exitCode=0 Jan 28 17:34:58 crc kubenswrapper[5001]: I0128 17:34:58.133480 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"1bc7adbb-0250-4320-98f4-7a0a69b77724","Type":"ContainerDied","Data":"6878fa1be98126c24891e80cbe9d20218801e0e0473cab820895ab93335a4d98"} Jan 28 17:34:58 crc kubenswrapper[5001]: I0128 17:34:58.135674 5001 generic.go:334] "Generic (PLEG): container finished" podID="420bd810-85d0-4ced-bcd8-3ae62c8c79e4" containerID="6e1e124a0107c8162b9382665171f87a54ebbf99d6730d3017fab63816c1ca9c" exitCode=0 Jan 28 17:34:58 crc kubenswrapper[5001]: I0128 17:34:58.135709 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"420bd810-85d0-4ced-bcd8-3ae62c8c79e4","Type":"ContainerDied","Data":"6e1e124a0107c8162b9382665171f87a54ebbf99d6730d3017fab63816c1ca9c"} Jan 28 17:34:59 crc kubenswrapper[5001]: I0128 17:34:59.143754 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"1bc7adbb-0250-4320-98f4-7a0a69b77724","Type":"ContainerStarted","Data":"6025cac29ae60c537a8a5101371d24d021c73f65e1cea870639c4c2e3a9e618f"} Jan 28 17:34:59 crc kubenswrapper[5001]: I0128 17:34:59.145048 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:34:59 crc kubenswrapper[5001]: I0128 17:34:59.146969 5001 generic.go:334] "Generic 
(PLEG): container finished" podID="bdcddc03-22a9-44af-8c79-67fe309358ec" containerID="81a29eb8d4b42b5747be8da178a68524a9543abe9ddf86ead9ef0b4c3b7d71d1" exitCode=0 Jan 28 17:34:59 crc kubenswrapper[5001]: I0128 17:34:59.147046 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"bdcddc03-22a9-44af-8c79-67fe309358ec","Type":"ContainerDied","Data":"81a29eb8d4b42b5747be8da178a68524a9543abe9ddf86ead9ef0b4c3b7d71d1"} Jan 28 17:34:59 crc kubenswrapper[5001]: I0128 17:34:59.150269 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"420bd810-85d0-4ced-bcd8-3ae62c8c79e4","Type":"ContainerStarted","Data":"877cc32bee289738fe4f28cc0f3e13c6c8551a6c342f126f6f059dec8b3cab0c"} Jan 28 17:34:59 crc kubenswrapper[5001]: I0128 17:34:59.150800 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:34:59 crc kubenswrapper[5001]: I0128 17:34:59.184760 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-cell1-server-0" podStartSLOduration=37.968000406 podStartE2EDuration="1m15.184737221s" podCreationTimestamp="2026-01-28 17:33:44 +0000 UTC" firstStartedPulling="2026-01-28 17:33:47.146288517 +0000 UTC m=+1073.314076747" lastFinishedPulling="2026-01-28 17:34:24.363025332 +0000 UTC m=+1110.530813562" observedRunningTime="2026-01-28 17:34:59.176885893 +0000 UTC m=+1145.344674163" watchObservedRunningTime="2026-01-28 17:34:59.184737221 +0000 UTC m=+1145.352525451" Jan 28 17:34:59 crc kubenswrapper[5001]: I0128 17:34:59.237392 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-server-0" podStartSLOduration=37.037626731 podStartE2EDuration="1m15.237374216s" podCreationTimestamp="2026-01-28 17:33:44 +0000 UTC" firstStartedPulling="2026-01-28 17:33:46.161803124 +0000 UTC m=+1072.329591354" lastFinishedPulling="2026-01-28 17:34:24.361550609 +0000 UTC m=+1110.529338839" observedRunningTime="2026-01-28 17:34:59.230439695 +0000 UTC m=+1145.398227925" watchObservedRunningTime="2026-01-28 17:34:59.237374216 +0000 UTC m=+1145.405162446" Jan 28 17:35:00 crc kubenswrapper[5001]: I0128 17:35:00.160393 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"bdcddc03-22a9-44af-8c79-67fe309358ec","Type":"ContainerStarted","Data":"aa9b6d0a5bd1917d023828f5ba463e599c98ffff87f8b2006218eda4cb212521"} Jan 28 17:35:00 crc kubenswrapper[5001]: I0128 17:35:00.161263 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:35:15 crc kubenswrapper[5001]: I0128 17:35:15.630221 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 17:35:15 crc kubenswrapper[5001]: I0128 17:35:15.662045 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" podStartSLOduration=-9223371945.192757 podStartE2EDuration="1m31.662019477s" podCreationTimestamp="2026-01-28 17:33:44 +0000 UTC" firstStartedPulling="2026-01-28 17:33:46.663916901 +0000 UTC m=+1072.831705121" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:35:00.187373768 +0000 UTC m=+1146.355162018" watchObservedRunningTime="2026-01-28 17:35:15.662019477 +0000 UTC m=+1161.829807747" Jan 28 
17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.054251 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.097835 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-sync-rx4q2"] Jan 28 17:35:16 crc kubenswrapper[5001]: E0128 17:35:16.098219 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0548880a-31f0-4d7d-9eb8-5b402a8cc67a" containerName="mariadb-account-create-update" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.098239 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="0548880a-31f0-4d7d-9eb8-5b402a8cc67a" containerName="mariadb-account-create-update" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.098426 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="0548880a-31f0-4d7d-9eb8-5b402a8cc67a" containerName="mariadb-account-create-update" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.099015 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-rx4q2" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.101636 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.101949 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-99bsk" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.102142 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.102841 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.110593 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-rx4q2"] Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.289424 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1b7798-ae74-4b6b-ade7-f282fa5e3253-combined-ca-bundle\") pod \"keystone-db-sync-rx4q2\" (UID: \"3b1b7798-ae74-4b6b-ade7-f282fa5e3253\") " pod="nova-kuttl-default/keystone-db-sync-rx4q2" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.289486 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzz4p\" (UniqueName: \"kubernetes.io/projected/3b1b7798-ae74-4b6b-ade7-f282fa5e3253-kube-api-access-gzz4p\") pod \"keystone-db-sync-rx4q2\" (UID: \"3b1b7798-ae74-4b6b-ade7-f282fa5e3253\") " pod="nova-kuttl-default/keystone-db-sync-rx4q2" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.289519 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1b7798-ae74-4b6b-ade7-f282fa5e3253-config-data\") pod \"keystone-db-sync-rx4q2\" (UID: \"3b1b7798-ae74-4b6b-ade7-f282fa5e3253\") " pod="nova-kuttl-default/keystone-db-sync-rx4q2" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.390402 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1b7798-ae74-4b6b-ade7-f282fa5e3253-combined-ca-bundle\") pod 
\"keystone-db-sync-rx4q2\" (UID: \"3b1b7798-ae74-4b6b-ade7-f282fa5e3253\") " pod="nova-kuttl-default/keystone-db-sync-rx4q2" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.390468 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzz4p\" (UniqueName: \"kubernetes.io/projected/3b1b7798-ae74-4b6b-ade7-f282fa5e3253-kube-api-access-gzz4p\") pod \"keystone-db-sync-rx4q2\" (UID: \"3b1b7798-ae74-4b6b-ade7-f282fa5e3253\") " pod="nova-kuttl-default/keystone-db-sync-rx4q2" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.390497 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1b7798-ae74-4b6b-ade7-f282fa5e3253-config-data\") pod \"keystone-db-sync-rx4q2\" (UID: \"3b1b7798-ae74-4b6b-ade7-f282fa5e3253\") " pod="nova-kuttl-default/keystone-db-sync-rx4q2" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.398124 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1b7798-ae74-4b6b-ade7-f282fa5e3253-combined-ca-bundle\") pod \"keystone-db-sync-rx4q2\" (UID: \"3b1b7798-ae74-4b6b-ade7-f282fa5e3253\") " pod="nova-kuttl-default/keystone-db-sync-rx4q2" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.409288 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1b7798-ae74-4b6b-ade7-f282fa5e3253-config-data\") pod \"keystone-db-sync-rx4q2\" (UID: \"3b1b7798-ae74-4b6b-ade7-f282fa5e3253\") " pod="nova-kuttl-default/keystone-db-sync-rx4q2" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.411948 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzz4p\" (UniqueName: \"kubernetes.io/projected/3b1b7798-ae74-4b6b-ade7-f282fa5e3253-kube-api-access-gzz4p\") pod \"keystone-db-sync-rx4q2\" (UID: \"3b1b7798-ae74-4b6b-ade7-f282fa5e3253\") " pod="nova-kuttl-default/keystone-db-sync-rx4q2" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.420124 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-rx4q2" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.603673 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 17:35:16 crc kubenswrapper[5001]: I0128 17:35:16.877675 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-rx4q2"] Jan 28 17:35:16 crc kubenswrapper[5001]: W0128 17:35:16.880875 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b1b7798_ae74_4b6b_ade7_f282fa5e3253.slice/crio-a9e6180f1ba83d00c4801fc04f5cffc96643ad8e4691f61995d527e25d20e196 WatchSource:0}: Error finding container a9e6180f1ba83d00c4801fc04f5cffc96643ad8e4691f61995d527e25d20e196: Status 404 returned error can't find the container with id a9e6180f1ba83d00c4801fc04f5cffc96643ad8e4691f61995d527e25d20e196 Jan 28 17:35:17 crc kubenswrapper[5001]: I0128 17:35:17.292260 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-rx4q2" event={"ID":"3b1b7798-ae74-4b6b-ade7-f282fa5e3253","Type":"ContainerStarted","Data":"a9e6180f1ba83d00c4801fc04f5cffc96643ad8e4691f61995d527e25d20e196"} Jan 28 17:35:24 crc kubenswrapper[5001]: I0128 17:35:24.344143 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-rx4q2" event={"ID":"3b1b7798-ae74-4b6b-ade7-f282fa5e3253","Type":"ContainerStarted","Data":"a5444bfa04b6403bee32f437e0f37005900a5fc5f2cc65f33fef4d91898918f0"} Jan 28 17:35:24 crc kubenswrapper[5001]: I0128 17:35:24.364117 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-db-sync-rx4q2" podStartSLOduration=1.711696227 podStartE2EDuration="8.364098277s" podCreationTimestamp="2026-01-28 17:35:16 +0000 UTC" firstStartedPulling="2026-01-28 17:35:16.882776644 +0000 UTC m=+1163.050564874" lastFinishedPulling="2026-01-28 17:35:23.535178694 +0000 UTC m=+1169.702966924" observedRunningTime="2026-01-28 17:35:24.358248908 +0000 UTC m=+1170.526037138" watchObservedRunningTime="2026-01-28 17:35:24.364098277 +0000 UTC m=+1170.531886507" Jan 28 17:35:27 crc kubenswrapper[5001]: I0128 17:35:27.375116 5001 generic.go:334] "Generic (PLEG): container finished" podID="3b1b7798-ae74-4b6b-ade7-f282fa5e3253" containerID="a5444bfa04b6403bee32f437e0f37005900a5fc5f2cc65f33fef4d91898918f0" exitCode=0 Jan 28 17:35:27 crc kubenswrapper[5001]: I0128 17:35:27.375197 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-rx4q2" event={"ID":"3b1b7798-ae74-4b6b-ade7-f282fa5e3253","Type":"ContainerDied","Data":"a5444bfa04b6403bee32f437e0f37005900a5fc5f2cc65f33fef4d91898918f0"} Jan 28 17:35:28 crc kubenswrapper[5001]: I0128 17:35:28.656119 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-rx4q2" Jan 28 17:35:28 crc kubenswrapper[5001]: I0128 17:35:28.772891 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1b7798-ae74-4b6b-ade7-f282fa5e3253-combined-ca-bundle\") pod \"3b1b7798-ae74-4b6b-ade7-f282fa5e3253\" (UID: \"3b1b7798-ae74-4b6b-ade7-f282fa5e3253\") " Jan 28 17:35:28 crc kubenswrapper[5001]: I0128 17:35:28.773047 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzz4p\" (UniqueName: \"kubernetes.io/projected/3b1b7798-ae74-4b6b-ade7-f282fa5e3253-kube-api-access-gzz4p\") pod \"3b1b7798-ae74-4b6b-ade7-f282fa5e3253\" (UID: \"3b1b7798-ae74-4b6b-ade7-f282fa5e3253\") " Jan 28 17:35:28 crc kubenswrapper[5001]: I0128 17:35:28.773165 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1b7798-ae74-4b6b-ade7-f282fa5e3253-config-data\") pod \"3b1b7798-ae74-4b6b-ade7-f282fa5e3253\" (UID: \"3b1b7798-ae74-4b6b-ade7-f282fa5e3253\") " Jan 28 17:35:28 crc kubenswrapper[5001]: I0128 17:35:28.779281 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b1b7798-ae74-4b6b-ade7-f282fa5e3253-kube-api-access-gzz4p" (OuterVolumeSpecName: "kube-api-access-gzz4p") pod "3b1b7798-ae74-4b6b-ade7-f282fa5e3253" (UID: "3b1b7798-ae74-4b6b-ade7-f282fa5e3253"). InnerVolumeSpecName "kube-api-access-gzz4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:35:28 crc kubenswrapper[5001]: I0128 17:35:28.795803 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1b7798-ae74-4b6b-ade7-f282fa5e3253-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b1b7798-ae74-4b6b-ade7-f282fa5e3253" (UID: "3b1b7798-ae74-4b6b-ade7-f282fa5e3253"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:35:28 crc kubenswrapper[5001]: I0128 17:35:28.812614 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1b7798-ae74-4b6b-ade7-f282fa5e3253-config-data" (OuterVolumeSpecName: "config-data") pod "3b1b7798-ae74-4b6b-ade7-f282fa5e3253" (UID: "3b1b7798-ae74-4b6b-ade7-f282fa5e3253"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:35:28 crc kubenswrapper[5001]: I0128 17:35:28.875276 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1b7798-ae74-4b6b-ade7-f282fa5e3253-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:28 crc kubenswrapper[5001]: I0128 17:35:28.875314 5001 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1b7798-ae74-4b6b-ade7-f282fa5e3253-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:28 crc kubenswrapper[5001]: I0128 17:35:28.875332 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzz4p\" (UniqueName: \"kubernetes.io/projected/3b1b7798-ae74-4b6b-ade7-f282fa5e3253-kube-api-access-gzz4p\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.399290 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-rx4q2" event={"ID":"3b1b7798-ae74-4b6b-ade7-f282fa5e3253","Type":"ContainerDied","Data":"a9e6180f1ba83d00c4801fc04f5cffc96643ad8e4691f61995d527e25d20e196"} Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.399611 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9e6180f1ba83d00c4801fc04f5cffc96643ad8e4691f61995d527e25d20e196" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.399357 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-rx4q2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.618728 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bootstrap-6vmc2"] Jan 28 17:35:29 crc kubenswrapper[5001]: E0128 17:35:29.619152 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1b7798-ae74-4b6b-ade7-f282fa5e3253" containerName="keystone-db-sync" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.619175 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1b7798-ae74-4b6b-ade7-f282fa5e3253" containerName="keystone-db-sync" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.619341 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b1b7798-ae74-4b6b-ade7-f282fa5e3253" containerName="keystone-db-sync" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.619916 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.625840 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.626001 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.626018 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.626491 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.636856 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-6vmc2"] Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.638721 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-99bsk" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.778701 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-sync-wkjqk"] Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.782523 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-wkjqk" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.785035 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.790180 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.790544 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-jhqs8" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.796027 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mcqc\" (UniqueName: \"kubernetes.io/projected/ac235dad-3448-404b-845f-c77dc7eeb6d8-kube-api-access-6mcqc\") pod \"keystone-bootstrap-6vmc2\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.796087 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-config-data\") pod \"keystone-bootstrap-6vmc2\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.796159 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-combined-ca-bundle\") pod \"keystone-bootstrap-6vmc2\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.796223 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-fernet-keys\") pod \"keystone-bootstrap-6vmc2\" (UID: 
\"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.796247 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-scripts\") pod \"keystone-bootstrap-6vmc2\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.796272 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-credential-keys\") pod \"keystone-bootstrap-6vmc2\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.798344 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-wkjqk"] Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.897554 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-combined-ca-bundle\") pod \"keystone-bootstrap-6vmc2\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.897611 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl84d\" (UniqueName: \"kubernetes.io/projected/5929507d-7d57-4274-bd5f-f784c279d763-kube-api-access-rl84d\") pod \"placement-db-sync-wkjqk\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " pod="nova-kuttl-default/placement-db-sync-wkjqk" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.897644 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-fernet-keys\") pod \"keystone-bootstrap-6vmc2\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.897666 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-scripts\") pod \"keystone-bootstrap-6vmc2\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.897686 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-credential-keys\") pod \"keystone-bootstrap-6vmc2\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.897727 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5929507d-7d57-4274-bd5f-f784c279d763-config-data\") pod \"placement-db-sync-wkjqk\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " pod="nova-kuttl-default/placement-db-sync-wkjqk" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.897746 5001 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5929507d-7d57-4274-bd5f-f784c279d763-combined-ca-bundle\") pod \"placement-db-sync-wkjqk\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " pod="nova-kuttl-default/placement-db-sync-wkjqk" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.897795 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mcqc\" (UniqueName: \"kubernetes.io/projected/ac235dad-3448-404b-845f-c77dc7eeb6d8-kube-api-access-6mcqc\") pod \"keystone-bootstrap-6vmc2\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.897816 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-config-data\") pod \"keystone-bootstrap-6vmc2\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.897839 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5929507d-7d57-4274-bd5f-f784c279d763-logs\") pod \"placement-db-sync-wkjqk\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " pod="nova-kuttl-default/placement-db-sync-wkjqk" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.897876 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5929507d-7d57-4274-bd5f-f784c279d763-scripts\") pod \"placement-db-sync-wkjqk\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " pod="nova-kuttl-default/placement-db-sync-wkjqk" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.903098 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-combined-ca-bundle\") pod \"keystone-bootstrap-6vmc2\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.903245 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-config-data\") pod \"keystone-bootstrap-6vmc2\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.903663 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-scripts\") pod \"keystone-bootstrap-6vmc2\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.908102 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-fernet-keys\") pod \"keystone-bootstrap-6vmc2\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.908628 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-credential-keys\") pod \"keystone-bootstrap-6vmc2\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.921603 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mcqc\" (UniqueName: \"kubernetes.io/projected/ac235dad-3448-404b-845f-c77dc7eeb6d8-kube-api-access-6mcqc\") pod \"keystone-bootstrap-6vmc2\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.955660 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.999051 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl84d\" (UniqueName: \"kubernetes.io/projected/5929507d-7d57-4274-bd5f-f784c279d763-kube-api-access-rl84d\") pod \"placement-db-sync-wkjqk\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " pod="nova-kuttl-default/placement-db-sync-wkjqk" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.999154 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5929507d-7d57-4274-bd5f-f784c279d763-config-data\") pod \"placement-db-sync-wkjqk\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " pod="nova-kuttl-default/placement-db-sync-wkjqk" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.999184 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5929507d-7d57-4274-bd5f-f784c279d763-combined-ca-bundle\") pod \"placement-db-sync-wkjqk\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " pod="nova-kuttl-default/placement-db-sync-wkjqk" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.999246 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5929507d-7d57-4274-bd5f-f784c279d763-logs\") pod \"placement-db-sync-wkjqk\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " pod="nova-kuttl-default/placement-db-sync-wkjqk" Jan 28 17:35:29 crc kubenswrapper[5001]: I0128 17:35:29.999276 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5929507d-7d57-4274-bd5f-f784c279d763-scripts\") pod \"placement-db-sync-wkjqk\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " pod="nova-kuttl-default/placement-db-sync-wkjqk" Jan 28 17:35:30 crc kubenswrapper[5001]: I0128 17:35:30.000374 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5929507d-7d57-4274-bd5f-f784c279d763-logs\") pod \"placement-db-sync-wkjqk\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " pod="nova-kuttl-default/placement-db-sync-wkjqk" Jan 28 17:35:30 crc kubenswrapper[5001]: I0128 17:35:30.003721 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5929507d-7d57-4274-bd5f-f784c279d763-config-data\") pod \"placement-db-sync-wkjqk\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " pod="nova-kuttl-default/placement-db-sync-wkjqk" Jan 28 17:35:30 crc kubenswrapper[5001]: I0128 17:35:30.003796 5001 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5929507d-7d57-4274-bd5f-f784c279d763-combined-ca-bundle\") pod \"placement-db-sync-wkjqk\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " pod="nova-kuttl-default/placement-db-sync-wkjqk" Jan 28 17:35:30 crc kubenswrapper[5001]: I0128 17:35:30.005616 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5929507d-7d57-4274-bd5f-f784c279d763-scripts\") pod \"placement-db-sync-wkjqk\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " pod="nova-kuttl-default/placement-db-sync-wkjqk" Jan 28 17:35:30 crc kubenswrapper[5001]: I0128 17:35:30.016676 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl84d\" (UniqueName: \"kubernetes.io/projected/5929507d-7d57-4274-bd5f-f784c279d763-kube-api-access-rl84d\") pod \"placement-db-sync-wkjqk\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " pod="nova-kuttl-default/placement-db-sync-wkjqk" Jan 28 17:35:30 crc kubenswrapper[5001]: I0128 17:35:30.110337 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-wkjqk" Jan 28 17:35:30 crc kubenswrapper[5001]: I0128 17:35:30.433849 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-6vmc2"] Jan 28 17:35:30 crc kubenswrapper[5001]: I0128 17:35:30.566824 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-wkjqk"] Jan 28 17:35:30 crc kubenswrapper[5001]: W0128 17:35:30.575801 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5929507d_7d57_4274_bd5f_f784c279d763.slice/crio-a230dcc228c4f6902535adcd874ca6d2ece4b60b52f86287d20c9de8f6d5072e WatchSource:0}: Error finding container a230dcc228c4f6902535adcd874ca6d2ece4b60b52f86287d20c9de8f6d5072e: Status 404 returned error can't find the container with id a230dcc228c4f6902535adcd874ca6d2ece4b60b52f86287d20c9de8f6d5072e Jan 28 17:35:31 crc kubenswrapper[5001]: I0128 17:35:31.415921 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-6vmc2" event={"ID":"ac235dad-3448-404b-845f-c77dc7eeb6d8","Type":"ContainerStarted","Data":"da5502ebd60c2839eef6cea5a18613d8d970a4e96f86a72b682bcf58b1ac0919"} Jan 28 17:35:31 crc kubenswrapper[5001]: I0128 17:35:31.416293 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-6vmc2" event={"ID":"ac235dad-3448-404b-845f-c77dc7eeb6d8","Type":"ContainerStarted","Data":"ef7bbeb4954a347fb2125b7fcf93a523ba12eb3592be939f4a051b6df09d7941"} Jan 28 17:35:31 crc kubenswrapper[5001]: I0128 17:35:31.418820 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-wkjqk" event={"ID":"5929507d-7d57-4274-bd5f-f784c279d763","Type":"ContainerStarted","Data":"a230dcc228c4f6902535adcd874ca6d2ece4b60b52f86287d20c9de8f6d5072e"} Jan 28 17:35:31 crc kubenswrapper[5001]: I0128 17:35:31.435872 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-6vmc2" podStartSLOduration=2.435852464 podStartE2EDuration="2.435852464s" podCreationTimestamp="2026-01-28 17:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:35:31.430232762 +0000 UTC 
m=+1177.598020992" watchObservedRunningTime="2026-01-28 17:35:31.435852464 +0000 UTC m=+1177.603640704" Jan 28 17:35:34 crc kubenswrapper[5001]: I0128 17:35:34.474301 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-wkjqk" event={"ID":"5929507d-7d57-4274-bd5f-f784c279d763","Type":"ContainerStarted","Data":"a0f2cfaf86049378ef611c0b9be88ad6ad7ca696b97d61b543b936b956350897"} Jan 28 17:35:34 crc kubenswrapper[5001]: I0128 17:35:34.476218 5001 generic.go:334] "Generic (PLEG): container finished" podID="ac235dad-3448-404b-845f-c77dc7eeb6d8" containerID="da5502ebd60c2839eef6cea5a18613d8d970a4e96f86a72b682bcf58b1ac0919" exitCode=0 Jan 28 17:35:34 crc kubenswrapper[5001]: I0128 17:35:34.476250 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-6vmc2" event={"ID":"ac235dad-3448-404b-845f-c77dc7eeb6d8","Type":"ContainerDied","Data":"da5502ebd60c2839eef6cea5a18613d8d970a4e96f86a72b682bcf58b1ac0919"} Jan 28 17:35:34 crc kubenswrapper[5001]: I0128 17:35:34.499916 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-db-sync-wkjqk" podStartSLOduration=2.115884942 podStartE2EDuration="5.499898123s" podCreationTimestamp="2026-01-28 17:35:29 +0000 UTC" firstStartedPulling="2026-01-28 17:35:30.578658556 +0000 UTC m=+1176.746446786" lastFinishedPulling="2026-01-28 17:35:33.962671737 +0000 UTC m=+1180.130459967" observedRunningTime="2026-01-28 17:35:34.491420808 +0000 UTC m=+1180.659209058" watchObservedRunningTime="2026-01-28 17:35:34.499898123 +0000 UTC m=+1180.667686353" Jan 28 17:35:35 crc kubenswrapper[5001]: I0128 17:35:35.808069 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:35 crc kubenswrapper[5001]: I0128 17:35:35.901460 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-fernet-keys\") pod \"ac235dad-3448-404b-845f-c77dc7eeb6d8\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " Jan 28 17:35:35 crc kubenswrapper[5001]: I0128 17:35:35.901530 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-config-data\") pod \"ac235dad-3448-404b-845f-c77dc7eeb6d8\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " Jan 28 17:35:35 crc kubenswrapper[5001]: I0128 17:35:35.901561 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mcqc\" (UniqueName: \"kubernetes.io/projected/ac235dad-3448-404b-845f-c77dc7eeb6d8-kube-api-access-6mcqc\") pod \"ac235dad-3448-404b-845f-c77dc7eeb6d8\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " Jan 28 17:35:35 crc kubenswrapper[5001]: I0128 17:35:35.901623 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-credential-keys\") pod \"ac235dad-3448-404b-845f-c77dc7eeb6d8\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " Jan 28 17:35:35 crc kubenswrapper[5001]: I0128 17:35:35.901653 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-scripts\") pod \"ac235dad-3448-404b-845f-c77dc7eeb6d8\" (UID: 
\"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " Jan 28 17:35:35 crc kubenswrapper[5001]: I0128 17:35:35.901697 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-combined-ca-bundle\") pod \"ac235dad-3448-404b-845f-c77dc7eeb6d8\" (UID: \"ac235dad-3448-404b-845f-c77dc7eeb6d8\") " Jan 28 17:35:35 crc kubenswrapper[5001]: I0128 17:35:35.908071 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ac235dad-3448-404b-845f-c77dc7eeb6d8" (UID: "ac235dad-3448-404b-845f-c77dc7eeb6d8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:35:35 crc kubenswrapper[5001]: I0128 17:35:35.908291 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ac235dad-3448-404b-845f-c77dc7eeb6d8" (UID: "ac235dad-3448-404b-845f-c77dc7eeb6d8"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:35:35 crc kubenswrapper[5001]: I0128 17:35:35.911139 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-scripts" (OuterVolumeSpecName: "scripts") pod "ac235dad-3448-404b-845f-c77dc7eeb6d8" (UID: "ac235dad-3448-404b-845f-c77dc7eeb6d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:35:35 crc kubenswrapper[5001]: I0128 17:35:35.914916 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac235dad-3448-404b-845f-c77dc7eeb6d8-kube-api-access-6mcqc" (OuterVolumeSpecName: "kube-api-access-6mcqc") pod "ac235dad-3448-404b-845f-c77dc7eeb6d8" (UID: "ac235dad-3448-404b-845f-c77dc7eeb6d8"). InnerVolumeSpecName "kube-api-access-6mcqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:35:35 crc kubenswrapper[5001]: I0128 17:35:35.925276 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-config-data" (OuterVolumeSpecName: "config-data") pod "ac235dad-3448-404b-845f-c77dc7eeb6d8" (UID: "ac235dad-3448-404b-845f-c77dc7eeb6d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:35:35 crc kubenswrapper[5001]: I0128 17:35:35.927356 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac235dad-3448-404b-845f-c77dc7eeb6d8" (UID: "ac235dad-3448-404b-845f-c77dc7eeb6d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.003998 5001 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.004044 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.004058 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mcqc\" (UniqueName: \"kubernetes.io/projected/ac235dad-3448-404b-845f-c77dc7eeb6d8-kube-api-access-6mcqc\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.004073 5001 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.004085 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.004097 5001 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac235dad-3448-404b-845f-c77dc7eeb6d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.490850 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-6vmc2" event={"ID":"ac235dad-3448-404b-845f-c77dc7eeb6d8","Type":"ContainerDied","Data":"ef7bbeb4954a347fb2125b7fcf93a523ba12eb3592be939f4a051b6df09d7941"} Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.490893 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef7bbeb4954a347fb2125b7fcf93a523ba12eb3592be939f4a051b6df09d7941" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.490906 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-6vmc2" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.569988 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-6vmc2"] Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.576254 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-6vmc2"] Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.604423 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac235dad-3448-404b-845f-c77dc7eeb6d8" path="/var/lib/kubelet/pods/ac235dad-3448-404b-845f-c77dc7eeb6d8/volumes" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.669928 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bootstrap-gw8xf"] Jan 28 17:35:36 crc kubenswrapper[5001]: E0128 17:35:36.670300 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac235dad-3448-404b-845f-c77dc7eeb6d8" containerName="keystone-bootstrap" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.670324 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac235dad-3448-404b-845f-c77dc7eeb6d8" containerName="keystone-bootstrap" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.670510 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac235dad-3448-404b-845f-c77dc7eeb6d8" containerName="keystone-bootstrap" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.671020 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.672897 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.673491 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.673633 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.674268 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.674886 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-99bsk" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.691825 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-gw8xf"] Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.817537 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-scripts\") pod \"keystone-bootstrap-gw8xf\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.817590 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-credential-keys\") pod \"keystone-bootstrap-gw8xf\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.817814 5001 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m6wz\" (UniqueName: \"kubernetes.io/projected/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-kube-api-access-6m6wz\") pod \"keystone-bootstrap-gw8xf\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.817873 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-fernet-keys\") pod \"keystone-bootstrap-gw8xf\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.817954 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-combined-ca-bundle\") pod \"keystone-bootstrap-gw8xf\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.818032 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-config-data\") pod \"keystone-bootstrap-gw8xf\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.919222 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-scripts\") pod \"keystone-bootstrap-gw8xf\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.919882 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-credential-keys\") pod \"keystone-bootstrap-gw8xf\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.919943 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m6wz\" (UniqueName: \"kubernetes.io/projected/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-kube-api-access-6m6wz\") pod \"keystone-bootstrap-gw8xf\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.919964 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-fernet-keys\") pod \"keystone-bootstrap-gw8xf\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.920042 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-combined-ca-bundle\") pod \"keystone-bootstrap-gw8xf\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.920084 5001 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-config-data\") pod \"keystone-bootstrap-gw8xf\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.923383 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-combined-ca-bundle\") pod \"keystone-bootstrap-gw8xf\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.923592 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-scripts\") pod \"keystone-bootstrap-gw8xf\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.924537 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-fernet-keys\") pod \"keystone-bootstrap-gw8xf\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.924679 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-credential-keys\") pod \"keystone-bootstrap-gw8xf\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.931039 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-config-data\") pod \"keystone-bootstrap-gw8xf\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.943774 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m6wz\" (UniqueName: \"kubernetes.io/projected/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-kube-api-access-6m6wz\") pod \"keystone-bootstrap-gw8xf\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:36 crc kubenswrapper[5001]: I0128 17:35:36.997205 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:37 crc kubenswrapper[5001]: I0128 17:35:37.424438 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-gw8xf"] Jan 28 17:35:37 crc kubenswrapper[5001]: I0128 17:35:37.497689 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-gw8xf" event={"ID":"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d","Type":"ContainerStarted","Data":"515634ab78eb06b3f9704365ba445aae90c0fbe67c4090dc9979cc9a1614d3bd"} Jan 28 17:35:38 crc kubenswrapper[5001]: I0128 17:35:38.521667 5001 generic.go:334] "Generic (PLEG): container finished" podID="5929507d-7d57-4274-bd5f-f784c279d763" containerID="a0f2cfaf86049378ef611c0b9be88ad6ad7ca696b97d61b543b936b956350897" exitCode=0 Jan 28 17:35:38 crc kubenswrapper[5001]: I0128 17:35:38.521748 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-wkjqk" event={"ID":"5929507d-7d57-4274-bd5f-f784c279d763","Type":"ContainerDied","Data":"a0f2cfaf86049378ef611c0b9be88ad6ad7ca696b97d61b543b936b956350897"} Jan 28 17:35:38 crc kubenswrapper[5001]: I0128 17:35:38.524057 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-gw8xf" event={"ID":"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d","Type":"ContainerStarted","Data":"cc45c85c7ec56e3e961c5863e4927f84487a54081e79d3d4bd9a9cb441e6a1e1"} Jan 28 17:35:38 crc kubenswrapper[5001]: I0128 17:35:38.557128 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-gw8xf" podStartSLOduration=2.5571079880000003 podStartE2EDuration="2.557107988s" podCreationTimestamp="2026-01-28 17:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:35:38.551905678 +0000 UTC m=+1184.719693938" watchObservedRunningTime="2026-01-28 17:35:38.557107988 +0000 UTC m=+1184.724896218" Jan 28 17:35:39 crc kubenswrapper[5001]: I0128 17:35:39.873721 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-sync-wkjqk" Jan 28 17:35:39 crc kubenswrapper[5001]: I0128 17:35:39.967303 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5929507d-7d57-4274-bd5f-f784c279d763-scripts\") pod \"5929507d-7d57-4274-bd5f-f784c279d763\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " Jan 28 17:35:39 crc kubenswrapper[5001]: I0128 17:35:39.967406 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl84d\" (UniqueName: \"kubernetes.io/projected/5929507d-7d57-4274-bd5f-f784c279d763-kube-api-access-rl84d\") pod \"5929507d-7d57-4274-bd5f-f784c279d763\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " Jan 28 17:35:39 crc kubenswrapper[5001]: I0128 17:35:39.967530 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5929507d-7d57-4274-bd5f-f784c279d763-config-data\") pod \"5929507d-7d57-4274-bd5f-f784c279d763\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " Jan 28 17:35:39 crc kubenswrapper[5001]: I0128 17:35:39.967680 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5929507d-7d57-4274-bd5f-f784c279d763-logs\") pod \"5929507d-7d57-4274-bd5f-f784c279d763\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " Jan 28 17:35:39 crc kubenswrapper[5001]: I0128 17:35:39.967727 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5929507d-7d57-4274-bd5f-f784c279d763-combined-ca-bundle\") pod \"5929507d-7d57-4274-bd5f-f784c279d763\" (UID: \"5929507d-7d57-4274-bd5f-f784c279d763\") " Jan 28 17:35:39 crc kubenswrapper[5001]: I0128 17:35:39.970272 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5929507d-7d57-4274-bd5f-f784c279d763-logs" (OuterVolumeSpecName: "logs") pod "5929507d-7d57-4274-bd5f-f784c279d763" (UID: "5929507d-7d57-4274-bd5f-f784c279d763"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:35:39 crc kubenswrapper[5001]: I0128 17:35:39.976106 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5929507d-7d57-4274-bd5f-f784c279d763-scripts" (OuterVolumeSpecName: "scripts") pod "5929507d-7d57-4274-bd5f-f784c279d763" (UID: "5929507d-7d57-4274-bd5f-f784c279d763"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:35:39 crc kubenswrapper[5001]: I0128 17:35:39.976923 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5929507d-7d57-4274-bd5f-f784c279d763-kube-api-access-rl84d" (OuterVolumeSpecName: "kube-api-access-rl84d") pod "5929507d-7d57-4274-bd5f-f784c279d763" (UID: "5929507d-7d57-4274-bd5f-f784c279d763"). InnerVolumeSpecName "kube-api-access-rl84d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:35:39 crc kubenswrapper[5001]: I0128 17:35:39.991006 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5929507d-7d57-4274-bd5f-f784c279d763-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5929507d-7d57-4274-bd5f-f784c279d763" (UID: "5929507d-7d57-4274-bd5f-f784c279d763"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:35:39 crc kubenswrapper[5001]: I0128 17:35:39.992539 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5929507d-7d57-4274-bd5f-f784c279d763-config-data" (OuterVolumeSpecName: "config-data") pod "5929507d-7d57-4274-bd5f-f784c279d763" (UID: "5929507d-7d57-4274-bd5f-f784c279d763"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.069631 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5929507d-7d57-4274-bd5f-f784c279d763-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.069703 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl84d\" (UniqueName: \"kubernetes.io/projected/5929507d-7d57-4274-bd5f-f784c279d763-kube-api-access-rl84d\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.069721 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5929507d-7d57-4274-bd5f-f784c279d763-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.069733 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5929507d-7d57-4274-bd5f-f784c279d763-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.069744 5001 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5929507d-7d57-4274-bd5f-f784c279d763-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.541777 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-wkjqk" event={"ID":"5929507d-7d57-4274-bd5f-f784c279d763","Type":"ContainerDied","Data":"a230dcc228c4f6902535adcd874ca6d2ece4b60b52f86287d20c9de8f6d5072e"} Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.541830 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a230dcc228c4f6902535adcd874ca6d2ece4b60b52f86287d20c9de8f6d5072e" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.541813 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-sync-wkjqk" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.544920 5001 generic.go:334] "Generic (PLEG): container finished" podID="b7c4ec90-8fe2-48ff-8ab8-478abb02b03d" containerID="cc45c85c7ec56e3e961c5863e4927f84487a54081e79d3d4bd9a9cb441e6a1e1" exitCode=0 Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.544954 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-gw8xf" event={"ID":"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d","Type":"ContainerDied","Data":"cc45c85c7ec56e3e961c5863e4927f84487a54081e79d3d4bd9a9cb441e6a1e1"} Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.654007 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-b8bc4c9f4-htpjw"] Jan 28 17:35:40 crc kubenswrapper[5001]: E0128 17:35:40.654562 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5929507d-7d57-4274-bd5f-f784c279d763" containerName="placement-db-sync" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.654587 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="5929507d-7d57-4274-bd5f-f784c279d763" containerName="placement-db-sync" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.654776 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="5929507d-7d57-4274-bd5f-f784c279d763" containerName="placement-db-sync" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.655820 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.658409 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.658717 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-jhqs8" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.658838 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.665073 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-b8bc4c9f4-htpjw"] Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.779003 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d62q9\" (UniqueName: \"kubernetes.io/projected/85ac1e3b-7a2c-4c90-adcb-34d4efd01f41-kube-api-access-d62q9\") pod \"placement-b8bc4c9f4-htpjw\" (UID: \"85ac1e3b-7a2c-4c90-adcb-34d4efd01f41\") " pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.779198 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85ac1e3b-7a2c-4c90-adcb-34d4efd01f41-config-data\") pod \"placement-b8bc4c9f4-htpjw\" (UID: \"85ac1e3b-7a2c-4c90-adcb-34d4efd01f41\") " pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.779326 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85ac1e3b-7a2c-4c90-adcb-34d4efd01f41-logs\") pod \"placement-b8bc4c9f4-htpjw\" (UID: \"85ac1e3b-7a2c-4c90-adcb-34d4efd01f41\") " pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:35:40 crc 
kubenswrapper[5001]: I0128 17:35:40.779400 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ac1e3b-7a2c-4c90-adcb-34d4efd01f41-combined-ca-bundle\") pod \"placement-b8bc4c9f4-htpjw\" (UID: \"85ac1e3b-7a2c-4c90-adcb-34d4efd01f41\") " pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.779578 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85ac1e3b-7a2c-4c90-adcb-34d4efd01f41-scripts\") pod \"placement-b8bc4c9f4-htpjw\" (UID: \"85ac1e3b-7a2c-4c90-adcb-34d4efd01f41\") " pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.880709 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d62q9\" (UniqueName: \"kubernetes.io/projected/85ac1e3b-7a2c-4c90-adcb-34d4efd01f41-kube-api-access-d62q9\") pod \"placement-b8bc4c9f4-htpjw\" (UID: \"85ac1e3b-7a2c-4c90-adcb-34d4efd01f41\") " pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.880831 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85ac1e3b-7a2c-4c90-adcb-34d4efd01f41-config-data\") pod \"placement-b8bc4c9f4-htpjw\" (UID: \"85ac1e3b-7a2c-4c90-adcb-34d4efd01f41\") " pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.880875 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85ac1e3b-7a2c-4c90-adcb-34d4efd01f41-logs\") pod \"placement-b8bc4c9f4-htpjw\" (UID: \"85ac1e3b-7a2c-4c90-adcb-34d4efd01f41\") " pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.880921 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ac1e3b-7a2c-4c90-adcb-34d4efd01f41-combined-ca-bundle\") pod \"placement-b8bc4c9f4-htpjw\" (UID: \"85ac1e3b-7a2c-4c90-adcb-34d4efd01f41\") " pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.881011 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85ac1e3b-7a2c-4c90-adcb-34d4efd01f41-scripts\") pod \"placement-b8bc4c9f4-htpjw\" (UID: \"85ac1e3b-7a2c-4c90-adcb-34d4efd01f41\") " pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.881624 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85ac1e3b-7a2c-4c90-adcb-34d4efd01f41-logs\") pod \"placement-b8bc4c9f4-htpjw\" (UID: \"85ac1e3b-7a2c-4c90-adcb-34d4efd01f41\") " pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.885472 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85ac1e3b-7a2c-4c90-adcb-34d4efd01f41-config-data\") pod \"placement-b8bc4c9f4-htpjw\" (UID: \"85ac1e3b-7a2c-4c90-adcb-34d4efd01f41\") " pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.885827 5001 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85ac1e3b-7a2c-4c90-adcb-34d4efd01f41-scripts\") pod \"placement-b8bc4c9f4-htpjw\" (UID: \"85ac1e3b-7a2c-4c90-adcb-34d4efd01f41\") " pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.886324 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ac1e3b-7a2c-4c90-adcb-34d4efd01f41-combined-ca-bundle\") pod \"placement-b8bc4c9f4-htpjw\" (UID: \"85ac1e3b-7a2c-4c90-adcb-34d4efd01f41\") " pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.899921 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d62q9\" (UniqueName: \"kubernetes.io/projected/85ac1e3b-7a2c-4c90-adcb-34d4efd01f41-kube-api-access-d62q9\") pod \"placement-b8bc4c9f4-htpjw\" (UID: \"85ac1e3b-7a2c-4c90-adcb-34d4efd01f41\") " pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:35:40 crc kubenswrapper[5001]: I0128 17:35:40.971966 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:35:41 crc kubenswrapper[5001]: I0128 17:35:41.412175 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-b8bc4c9f4-htpjw"] Jan 28 17:35:41 crc kubenswrapper[5001]: I0128 17:35:41.555162 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" event={"ID":"85ac1e3b-7a2c-4c90-adcb-34d4efd01f41","Type":"ContainerStarted","Data":"3ffb6bf0f5c3e2308a04c0e714edb180fb8e0e09df3151be87c79d811a413a7c"} Jan 28 17:35:41 crc kubenswrapper[5001]: I0128 17:35:41.815367 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:41 crc kubenswrapper[5001]: I0128 17:35:41.902782 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m6wz\" (UniqueName: \"kubernetes.io/projected/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-kube-api-access-6m6wz\") pod \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " Jan 28 17:35:41 crc kubenswrapper[5001]: I0128 17:35:41.902871 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-config-data\") pod \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " Jan 28 17:35:41 crc kubenswrapper[5001]: I0128 17:35:41.902945 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-credential-keys\") pod \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " Jan 28 17:35:41 crc kubenswrapper[5001]: I0128 17:35:41.903001 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-combined-ca-bundle\") pod \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " Jan 28 17:35:41 crc kubenswrapper[5001]: I0128 17:35:41.903100 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-fernet-keys\") pod \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " Jan 28 17:35:41 crc kubenswrapper[5001]: I0128 17:35:41.903163 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-scripts\") pod \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\" (UID: \"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d\") " Jan 28 17:35:41 crc kubenswrapper[5001]: I0128 17:35:41.908055 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-scripts" (OuterVolumeSpecName: "scripts") pod "b7c4ec90-8fe2-48ff-8ab8-478abb02b03d" (UID: "b7c4ec90-8fe2-48ff-8ab8-478abb02b03d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:35:41 crc kubenswrapper[5001]: I0128 17:35:41.908074 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b7c4ec90-8fe2-48ff-8ab8-478abb02b03d" (UID: "b7c4ec90-8fe2-48ff-8ab8-478abb02b03d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:35:41 crc kubenswrapper[5001]: I0128 17:35:41.908829 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-kube-api-access-6m6wz" (OuterVolumeSpecName: "kube-api-access-6m6wz") pod "b7c4ec90-8fe2-48ff-8ab8-478abb02b03d" (UID: "b7c4ec90-8fe2-48ff-8ab8-478abb02b03d"). InnerVolumeSpecName "kube-api-access-6m6wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:35:41 crc kubenswrapper[5001]: I0128 17:35:41.908907 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b7c4ec90-8fe2-48ff-8ab8-478abb02b03d" (UID: "b7c4ec90-8fe2-48ff-8ab8-478abb02b03d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:35:41 crc kubenswrapper[5001]: I0128 17:35:41.932124 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-config-data" (OuterVolumeSpecName: "config-data") pod "b7c4ec90-8fe2-48ff-8ab8-478abb02b03d" (UID: "b7c4ec90-8fe2-48ff-8ab8-478abb02b03d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:35:41 crc kubenswrapper[5001]: I0128 17:35:41.938089 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7c4ec90-8fe2-48ff-8ab8-478abb02b03d" (UID: "b7c4ec90-8fe2-48ff-8ab8-478abb02b03d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.004785 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.004822 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m6wz\" (UniqueName: \"kubernetes.io/projected/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-kube-api-access-6m6wz\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.004833 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.004842 5001 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.004852 5001 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.004861 5001 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.564259 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" event={"ID":"85ac1e3b-7a2c-4c90-adcb-34d4efd01f41","Type":"ContainerStarted","Data":"26d76c2574eb66422c4761c6f7500fc83e24766f46594f006dc7813b3580f858"} Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.564598 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.564610 5001 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" event={"ID":"85ac1e3b-7a2c-4c90-adcb-34d4efd01f41","Type":"ContainerStarted","Data":"7e1e442cbf48b2398db090daf34fdd6895a60f15e21f93b42a18c1f42ec7417c"} Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.564622 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.565684 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-gw8xf" event={"ID":"b7c4ec90-8fe2-48ff-8ab8-478abb02b03d","Type":"ContainerDied","Data":"515634ab78eb06b3f9704365ba445aae90c0fbe67c4090dc9979cc9a1614d3bd"} Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.565725 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="515634ab78eb06b3f9704365ba445aae90c0fbe67c4090dc9979cc9a1614d3bd" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.565778 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-gw8xf" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.585851 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" podStartSLOduration=2.585827602 podStartE2EDuration="2.585827602s" podCreationTimestamp="2026-01-28 17:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:35:42.579555682 +0000 UTC m=+1188.747343932" watchObservedRunningTime="2026-01-28 17:35:42.585827602 +0000 UTC m=+1188.753615832" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.893599 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-848f6f5b7c-k72xs"] Jan 28 17:35:42 crc kubenswrapper[5001]: E0128 17:35:42.893968 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c4ec90-8fe2-48ff-8ab8-478abb02b03d" containerName="keystone-bootstrap" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.894002 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c4ec90-8fe2-48ff-8ab8-478abb02b03d" containerName="keystone-bootstrap" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.894152 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c4ec90-8fe2-48ff-8ab8-478abb02b03d" containerName="keystone-bootstrap" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.894707 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.896762 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-99bsk" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.897305 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.897511 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.897662 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 28 17:35:42 crc kubenswrapper[5001]: I0128 17:35:42.910701 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-848f6f5b7c-k72xs"] Jan 28 17:35:43 crc kubenswrapper[5001]: I0128 17:35:43.020367 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77f41721-97c0-4a00-83ca-c5fb40170cfa-credential-keys\") pod \"keystone-848f6f5b7c-k72xs\" (UID: \"77f41721-97c0-4a00-83ca-c5fb40170cfa\") " pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:43 crc kubenswrapper[5001]: I0128 17:35:43.020425 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77f41721-97c0-4a00-83ca-c5fb40170cfa-fernet-keys\") pod \"keystone-848f6f5b7c-k72xs\" (UID: \"77f41721-97c0-4a00-83ca-c5fb40170cfa\") " pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:43 crc kubenswrapper[5001]: I0128 17:35:43.020453 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77f41721-97c0-4a00-83ca-c5fb40170cfa-combined-ca-bundle\") pod \"keystone-848f6f5b7c-k72xs\" (UID: \"77f41721-97c0-4a00-83ca-c5fb40170cfa\") " pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:43 crc kubenswrapper[5001]: I0128 17:35:43.020490 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxxjg\" (UniqueName: \"kubernetes.io/projected/77f41721-97c0-4a00-83ca-c5fb40170cfa-kube-api-access-hxxjg\") pod \"keystone-848f6f5b7c-k72xs\" (UID: \"77f41721-97c0-4a00-83ca-c5fb40170cfa\") " pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:43 crc kubenswrapper[5001]: I0128 17:35:43.020885 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77f41721-97c0-4a00-83ca-c5fb40170cfa-config-data\") pod \"keystone-848f6f5b7c-k72xs\" (UID: \"77f41721-97c0-4a00-83ca-c5fb40170cfa\") " pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:43 crc kubenswrapper[5001]: I0128 17:35:43.021084 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77f41721-97c0-4a00-83ca-c5fb40170cfa-scripts\") pod \"keystone-848f6f5b7c-k72xs\" (UID: \"77f41721-97c0-4a00-83ca-c5fb40170cfa\") " pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:43 crc kubenswrapper[5001]: I0128 17:35:43.123445 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hxxjg\" (UniqueName: \"kubernetes.io/projected/77f41721-97c0-4a00-83ca-c5fb40170cfa-kube-api-access-hxxjg\") pod \"keystone-848f6f5b7c-k72xs\" (UID: \"77f41721-97c0-4a00-83ca-c5fb40170cfa\") " pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:43 crc kubenswrapper[5001]: I0128 17:35:43.123568 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77f41721-97c0-4a00-83ca-c5fb40170cfa-config-data\") pod \"keystone-848f6f5b7c-k72xs\" (UID: \"77f41721-97c0-4a00-83ca-c5fb40170cfa\") " pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:43 crc kubenswrapper[5001]: I0128 17:35:43.123620 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77f41721-97c0-4a00-83ca-c5fb40170cfa-scripts\") pod \"keystone-848f6f5b7c-k72xs\" (UID: \"77f41721-97c0-4a00-83ca-c5fb40170cfa\") " pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:43 crc kubenswrapper[5001]: I0128 17:35:43.123652 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77f41721-97c0-4a00-83ca-c5fb40170cfa-credential-keys\") pod \"keystone-848f6f5b7c-k72xs\" (UID: \"77f41721-97c0-4a00-83ca-c5fb40170cfa\") " pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:43 crc kubenswrapper[5001]: I0128 17:35:43.123702 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77f41721-97c0-4a00-83ca-c5fb40170cfa-fernet-keys\") pod \"keystone-848f6f5b7c-k72xs\" (UID: \"77f41721-97c0-4a00-83ca-c5fb40170cfa\") " pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:43 crc kubenswrapper[5001]: I0128 17:35:43.123751 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77f41721-97c0-4a00-83ca-c5fb40170cfa-combined-ca-bundle\") pod \"keystone-848f6f5b7c-k72xs\" (UID: \"77f41721-97c0-4a00-83ca-c5fb40170cfa\") " pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:43 crc kubenswrapper[5001]: I0128 17:35:43.127835 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77f41721-97c0-4a00-83ca-c5fb40170cfa-scripts\") pod \"keystone-848f6f5b7c-k72xs\" (UID: \"77f41721-97c0-4a00-83ca-c5fb40170cfa\") " pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:43 crc kubenswrapper[5001]: I0128 17:35:43.132613 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77f41721-97c0-4a00-83ca-c5fb40170cfa-config-data\") pod \"keystone-848f6f5b7c-k72xs\" (UID: \"77f41721-97c0-4a00-83ca-c5fb40170cfa\") " pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:43 crc kubenswrapper[5001]: I0128 17:35:43.136012 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77f41721-97c0-4a00-83ca-c5fb40170cfa-fernet-keys\") pod \"keystone-848f6f5b7c-k72xs\" (UID: \"77f41721-97c0-4a00-83ca-c5fb40170cfa\") " pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:43 crc kubenswrapper[5001]: I0128 17:35:43.136052 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/77f41721-97c0-4a00-83ca-c5fb40170cfa-credential-keys\") pod \"keystone-848f6f5b7c-k72xs\" (UID: \"77f41721-97c0-4a00-83ca-c5fb40170cfa\") " pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:43 crc kubenswrapper[5001]: I0128 17:35:43.136177 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77f41721-97c0-4a00-83ca-c5fb40170cfa-combined-ca-bundle\") pod \"keystone-848f6f5b7c-k72xs\" (UID: \"77f41721-97c0-4a00-83ca-c5fb40170cfa\") " pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:43 crc kubenswrapper[5001]: I0128 17:35:43.140203 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxxjg\" (UniqueName: \"kubernetes.io/projected/77f41721-97c0-4a00-83ca-c5fb40170cfa-kube-api-access-hxxjg\") pod \"keystone-848f6f5b7c-k72xs\" (UID: \"77f41721-97c0-4a00-83ca-c5fb40170cfa\") " pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:43 crc kubenswrapper[5001]: I0128 17:35:43.210910 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:35:44 crc kubenswrapper[5001]: I0128 17:35:44.054010 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-848f6f5b7c-k72xs"] Jan 28 17:35:44 crc kubenswrapper[5001]: I0128 17:35:44.602996 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" event={"ID":"77f41721-97c0-4a00-83ca-c5fb40170cfa","Type":"ContainerStarted","Data":"28e265963480faee445b9500cd7d3802742a7ad338f416d19d0d46560de482ed"} Jan 28 17:35:44 crc kubenswrapper[5001]: I0128 17:35:44.603372 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" event={"ID":"77f41721-97c0-4a00-83ca-c5fb40170cfa","Type":"ContainerStarted","Data":"11d6f4f3d20a78ebc922fc9dfcf32295767e8e8b4f3ee8010e4b048e592afa71"} Jan 28 17:35:44 crc kubenswrapper[5001]: I0128 17:35:44.637354 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" podStartSLOduration=2.637334805 podStartE2EDuration="2.637334805s" podCreationTimestamp="2026-01-28 17:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:35:44.63369439 +0000 UTC m=+1190.801482630" watchObservedRunningTime="2026-01-28 17:35:44.637334805 +0000 UTC m=+1190.805123035" Jan 28 17:35:45 crc kubenswrapper[5001]: I0128 17:35:45.598991 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:36:04 crc kubenswrapper[5001]: I0128 17:36:04.834403 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:36:04 crc kubenswrapper[5001]: I0128 17:36:04.835833 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:36:12 crc kubenswrapper[5001]: I0128 
17:36:12.451403 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:36:12 crc kubenswrapper[5001]: I0128 17:36:12.453028 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-b8bc4c9f4-htpjw" Jan 28 17:36:15 crc kubenswrapper[5001]: I0128 17:36:15.732197 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/keystone-848f6f5b7c-k72xs" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.199384 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.200557 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.202174 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-config-secret" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.202590 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.203825 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstackclient-openstackclient-dockercfg-cd5rh" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.212076 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.326420 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7mm7\" (UniqueName: \"kubernetes.io/projected/2a3c2b88-a587-4e96-a947-ea4594144582-kube-api-access-d7mm7\") pod \"openstackclient\" (UID: \"2a3c2b88-a587-4e96-a947-ea4594144582\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.326470 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3c2b88-a587-4e96-a947-ea4594144582-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2a3c2b88-a587-4e96-a947-ea4594144582\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.327103 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2a3c2b88-a587-4e96-a947-ea4594144582-openstack-config\") pod \"openstackclient\" (UID: \"2a3c2b88-a587-4e96-a947-ea4594144582\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.327168 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2a3c2b88-a587-4e96-a947-ea4594144582-openstack-config-secret\") pod \"openstackclient\" (UID: \"2a3c2b88-a587-4e96-a947-ea4594144582\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.429633 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7mm7\" (UniqueName: \"kubernetes.io/projected/2a3c2b88-a587-4e96-a947-ea4594144582-kube-api-access-d7mm7\") pod \"openstackclient\" (UID: \"2a3c2b88-a587-4e96-a947-ea4594144582\") " pod="nova-kuttl-default/openstackclient" 
Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.429719 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3c2b88-a587-4e96-a947-ea4594144582-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2a3c2b88-a587-4e96-a947-ea4594144582\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.429839 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2a3c2b88-a587-4e96-a947-ea4594144582-openstack-config\") pod \"openstackclient\" (UID: \"2a3c2b88-a587-4e96-a947-ea4594144582\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.429916 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2a3c2b88-a587-4e96-a947-ea4594144582-openstack-config-secret\") pod \"openstackclient\" (UID: \"2a3c2b88-a587-4e96-a947-ea4594144582\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.431518 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2a3c2b88-a587-4e96-a947-ea4594144582-openstack-config\") pod \"openstackclient\" (UID: \"2a3c2b88-a587-4e96-a947-ea4594144582\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.438589 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2a3c2b88-a587-4e96-a947-ea4594144582-openstack-config-secret\") pod \"openstackclient\" (UID: \"2a3c2b88-a587-4e96-a947-ea4594144582\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.440836 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3c2b88-a587-4e96-a947-ea4594144582-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2a3c2b88-a587-4e96-a947-ea4594144582\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.451041 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7mm7\" (UniqueName: \"kubernetes.io/projected/2a3c2b88-a587-4e96-a947-ea4594144582-kube-api-access-d7mm7\") pod \"openstackclient\" (UID: \"2a3c2b88-a587-4e96-a947-ea4594144582\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.518313 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.706576 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.755920 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.782867 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.784290 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.790021 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.962319 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6grfp\" (UniqueName: \"kubernetes.io/projected/535ec83f-ed6c-460b-8369-d710976d266f-kube-api-access-6grfp\") pod \"openstackclient\" (UID: \"535ec83f-ed6c-460b-8369-d710976d266f\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.962431 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/535ec83f-ed6c-460b-8369-d710976d266f-openstack-config-secret\") pod \"openstackclient\" (UID: \"535ec83f-ed6c-460b-8369-d710976d266f\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.962458 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/535ec83f-ed6c-460b-8369-d710976d266f-openstack-config\") pod \"openstackclient\" (UID: \"535ec83f-ed6c-460b-8369-d710976d266f\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:16 crc kubenswrapper[5001]: I0128 17:36:16.962518 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/535ec83f-ed6c-460b-8369-d710976d266f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"535ec83f-ed6c-460b-8369-d710976d266f\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.064290 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/535ec83f-ed6c-460b-8369-d710976d266f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"535ec83f-ed6c-460b-8369-d710976d266f\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.064433 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6grfp\" (UniqueName: \"kubernetes.io/projected/535ec83f-ed6c-460b-8369-d710976d266f-kube-api-access-6grfp\") pod \"openstackclient\" (UID: \"535ec83f-ed6c-460b-8369-d710976d266f\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.064507 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/535ec83f-ed6c-460b-8369-d710976d266f-openstack-config-secret\") pod \"openstackclient\" (UID: \"535ec83f-ed6c-460b-8369-d710976d266f\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.064534 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/535ec83f-ed6c-460b-8369-d710976d266f-openstack-config\") pod \"openstackclient\" (UID: \"535ec83f-ed6c-460b-8369-d710976d266f\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.065480 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/535ec83f-ed6c-460b-8369-d710976d266f-openstack-config\") pod \"openstackclient\" (UID: \"535ec83f-ed6c-460b-8369-d710976d266f\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.069464 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/535ec83f-ed6c-460b-8369-d710976d266f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"535ec83f-ed6c-460b-8369-d710976d266f\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.075865 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/535ec83f-ed6c-460b-8369-d710976d266f-openstack-config-secret\") pod \"openstackclient\" (UID: \"535ec83f-ed6c-460b-8369-d710976d266f\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:17 crc kubenswrapper[5001]: E0128 17:36:17.079114 5001 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 28 17:36:17 crc kubenswrapper[5001]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_nova-kuttl-default_2a3c2b88-a587-4e96-a947-ea4594144582_0(57b1a172df68933f119c22802d8f253efd5428311ac4ff8fe662dbe3abde092f): error adding pod nova-kuttl-default_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"57b1a172df68933f119c22802d8f253efd5428311ac4ff8fe662dbe3abde092f" Netns:"/var/run/netns/4223ca54-b8f9-4fa9-b522-e1eeb4a857ae" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=nova-kuttl-default;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=57b1a172df68933f119c22802d8f253efd5428311ac4ff8fe662dbe3abde092f;K8S_POD_UID=2a3c2b88-a587-4e96-a947-ea4594144582" Path:"" ERRORED: error configuring pod [nova-kuttl-default/openstackclient] networking: [nova-kuttl-default/openstackclient/2a3c2b88-a587-4e96-a947-ea4594144582:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[nova-kuttl-default/openstackclient 57b1a172df68933f119c22802d8f253efd5428311ac4ff8fe662dbe3abde092f network default NAD default] [nova-kuttl-default/openstackclient 57b1a172df68933f119c22802d8f253efd5428311ac4ff8fe662dbe3abde092f network default NAD default] pod deleted before sandbox ADD operation began Jan 28 17:36:17 crc kubenswrapper[5001]: ' Jan 28 17:36:17 crc kubenswrapper[5001]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 17:36:17 crc kubenswrapper[5001]: > Jan 28 17:36:17 crc kubenswrapper[5001]: E0128 17:36:17.079445 5001 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 28 17:36:17 crc kubenswrapper[5001]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_nova-kuttl-default_2a3c2b88-a587-4e96-a947-ea4594144582_0(57b1a172df68933f119c22802d8f253efd5428311ac4ff8fe662dbe3abde092f): error adding pod nova-kuttl-default_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): 
CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"57b1a172df68933f119c22802d8f253efd5428311ac4ff8fe662dbe3abde092f" Netns:"/var/run/netns/4223ca54-b8f9-4fa9-b522-e1eeb4a857ae" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=nova-kuttl-default;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=57b1a172df68933f119c22802d8f253efd5428311ac4ff8fe662dbe3abde092f;K8S_POD_UID=2a3c2b88-a587-4e96-a947-ea4594144582" Path:"" ERRORED: error configuring pod [nova-kuttl-default/openstackclient] networking: [nova-kuttl-default/openstackclient/2a3c2b88-a587-4e96-a947-ea4594144582:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[nova-kuttl-default/openstackclient 57b1a172df68933f119c22802d8f253efd5428311ac4ff8fe662dbe3abde092f network default NAD default] [nova-kuttl-default/openstackclient 57b1a172df68933f119c22802d8f253efd5428311ac4ff8fe662dbe3abde092f network default NAD default] pod deleted before sandbox ADD operation began Jan 28 17:36:17 crc kubenswrapper[5001]: ' Jan 28 17:36:17 crc kubenswrapper[5001]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 17:36:17 crc kubenswrapper[5001]: > pod="nova-kuttl-default/openstackclient" Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.084549 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6grfp\" (UniqueName: \"kubernetes.io/projected/535ec83f-ed6c-460b-8369-d710976d266f-kube-api-access-6grfp\") pod \"openstackclient\" (UID: \"535ec83f-ed6c-460b-8369-d710976d266f\") " pod="nova-kuttl-default/openstackclient" Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.107703 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.514337 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.852120 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.852110 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"535ec83f-ed6c-460b-8369-d710976d266f","Type":"ContainerStarted","Data":"1b9d3120f7cbb7124438f0a5fa11f756dd795aa340f66f057db8ee1dfc47b5e6"} Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.856145 5001 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="nova-kuttl-default/openstackclient" oldPodUID="2a3c2b88-a587-4e96-a947-ea4594144582" podUID="535ec83f-ed6c-460b-8369-d710976d266f" Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.862455 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.977277 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7mm7\" (UniqueName: \"kubernetes.io/projected/2a3c2b88-a587-4e96-a947-ea4594144582-kube-api-access-d7mm7\") pod \"2a3c2b88-a587-4e96-a947-ea4594144582\" (UID: \"2a3c2b88-a587-4e96-a947-ea4594144582\") " Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.977352 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3c2b88-a587-4e96-a947-ea4594144582-combined-ca-bundle\") pod \"2a3c2b88-a587-4e96-a947-ea4594144582\" (UID: \"2a3c2b88-a587-4e96-a947-ea4594144582\") " Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.977413 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2a3c2b88-a587-4e96-a947-ea4594144582-openstack-config\") pod \"2a3c2b88-a587-4e96-a947-ea4594144582\" (UID: \"2a3c2b88-a587-4e96-a947-ea4594144582\") " Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.977517 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2a3c2b88-a587-4e96-a947-ea4594144582-openstack-config-secret\") pod \"2a3c2b88-a587-4e96-a947-ea4594144582\" (UID: \"2a3c2b88-a587-4e96-a947-ea4594144582\") " Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.979433 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3c2b88-a587-4e96-a947-ea4594144582-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2a3c2b88-a587-4e96-a947-ea4594144582" (UID: "2a3c2b88-a587-4e96-a947-ea4594144582"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.983586 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3c2b88-a587-4e96-a947-ea4594144582-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a3c2b88-a587-4e96-a947-ea4594144582" (UID: "2a3c2b88-a587-4e96-a947-ea4594144582"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.983766 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a3c2b88-a587-4e96-a947-ea4594144582-kube-api-access-d7mm7" (OuterVolumeSpecName: "kube-api-access-d7mm7") pod "2a3c2b88-a587-4e96-a947-ea4594144582" (UID: "2a3c2b88-a587-4e96-a947-ea4594144582"). InnerVolumeSpecName "kube-api-access-d7mm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:36:17 crc kubenswrapper[5001]: I0128 17:36:17.983859 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3c2b88-a587-4e96-a947-ea4594144582-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2a3c2b88-a587-4e96-a947-ea4594144582" (UID: "2a3c2b88-a587-4e96-a947-ea4594144582"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:36:18 crc kubenswrapper[5001]: I0128 17:36:18.079626 5001 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2a3c2b88-a587-4e96-a947-ea4594144582-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:36:18 crc kubenswrapper[5001]: I0128 17:36:18.079675 5001 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2a3c2b88-a587-4e96-a947-ea4594144582-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 28 17:36:18 crc kubenswrapper[5001]: I0128 17:36:18.079691 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7mm7\" (UniqueName: \"kubernetes.io/projected/2a3c2b88-a587-4e96-a947-ea4594144582-kube-api-access-d7mm7\") on node \"crc\" DevicePath \"\"" Jan 28 17:36:18 crc kubenswrapper[5001]: I0128 17:36:18.079705 5001 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3c2b88-a587-4e96-a947-ea4594144582-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:36:18 crc kubenswrapper[5001]: I0128 17:36:18.604587 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a3c2b88-a587-4e96-a947-ea4594144582" path="/var/lib/kubelet/pods/2a3c2b88-a587-4e96-a947-ea4594144582/volumes" Jan 28 17:36:18 crc kubenswrapper[5001]: I0128 17:36:18.891916 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 28 17:36:18 crc kubenswrapper[5001]: I0128 17:36:18.899265 5001 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="nova-kuttl-default/openstackclient" oldPodUID="2a3c2b88-a587-4e96-a947-ea4594144582" podUID="535ec83f-ed6c-460b-8369-d710976d266f" Jan 28 17:36:31 crc kubenswrapper[5001]: I0128 17:36:31.260294 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"535ec83f-ed6c-460b-8369-d710976d266f","Type":"ContainerStarted","Data":"8903a9d4e038d8700cc3fc7b2f1b6306a75bc62254d96b6db5e2b0530951fea2"} Jan 28 17:36:31 crc kubenswrapper[5001]: I0128 17:36:31.284440 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstackclient" podStartSLOduration=2.658041821 podStartE2EDuration="15.284415545s" podCreationTimestamp="2026-01-28 17:36:16 +0000 UTC" firstStartedPulling="2026-01-28 17:36:17.523928171 +0000 UTC m=+1223.691716431" lastFinishedPulling="2026-01-28 17:36:30.150301925 +0000 UTC m=+1236.318090155" observedRunningTime="2026-01-28 17:36:31.27521928 +0000 UTC m=+1237.443007510" watchObservedRunningTime="2026-01-28 17:36:31.284415545 +0000 UTC m=+1237.452203785" Jan 28 17:36:34 crc kubenswrapper[5001]: I0128 17:36:34.833799 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:36:34 crc kubenswrapper[5001]: I0128 17:36:34.834131 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 28 17:36:38 crc kubenswrapper[5001]: I0128 17:36:38.327996 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w"] Jan 28 17:36:38 crc kubenswrapper[5001]: I0128 17:36:38.328499 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w" podUID="b17f77be-35c8-4b24-945a-7f9c10a4c78a" containerName="operator" containerID="cri-o://041a5c90a6f37f61bedae903f321c78393c762b6403cecc8ec1eecb64c21bf46" gracePeriod=10 Jan 28 17:36:38 crc kubenswrapper[5001]: I0128 17:36:38.521101 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct"] Jan 28 17:36:38 crc kubenswrapper[5001]: I0128 17:36:38.521609 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct" podUID="65012584-29ae-4c06-9cd0-e30a86d7ceca" containerName="manager" containerID="cri-o://df565de91cc7bfcfd678f7bcc3d58bb9767114f289a2a287d71df6cee3cb62eb" gracePeriod=10 Jan 28 17:36:38 crc kubenswrapper[5001]: I0128 17:36:38.888919 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-index-xmglc"] Jan 28 17:36:38 crc kubenswrapper[5001]: I0128 17:36:38.890027 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-xmglc" Jan 28 17:36:38 crc kubenswrapper[5001]: I0128 17:36:38.908707 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-index-dockercfg-szqxl" Jan 28 17:36:38 crc kubenswrapper[5001]: I0128 17:36:38.911217 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-xmglc"] Jan 28 17:36:38 crc kubenswrapper[5001]: I0128 17:36:38.998009 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4dpp\" (UniqueName: \"kubernetes.io/projected/515b4b1b-6c0f-4e1c-aada-8a76c2791afe-kube-api-access-r4dpp\") pod \"nova-operator-index-xmglc\" (UID: \"515b4b1b-6c0f-4e1c-aada-8a76c2791afe\") " pod="openstack-operators/nova-operator-index-xmglc" Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.007545 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct" Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.099547 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4csbv\" (UniqueName: \"kubernetes.io/projected/65012584-29ae-4c06-9cd0-e30a86d7ceca-kube-api-access-4csbv\") pod \"65012584-29ae-4c06-9cd0-e30a86d7ceca\" (UID: \"65012584-29ae-4c06-9cd0-e30a86d7ceca\") " Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.099929 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4dpp\" (UniqueName: \"kubernetes.io/projected/515b4b1b-6c0f-4e1c-aada-8a76c2791afe-kube-api-access-r4dpp\") pod \"nova-operator-index-xmglc\" (UID: \"515b4b1b-6c0f-4e1c-aada-8a76c2791afe\") " pod="openstack-operators/nova-operator-index-xmglc" Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.135174 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65012584-29ae-4c06-9cd0-e30a86d7ceca-kube-api-access-4csbv" (OuterVolumeSpecName: "kube-api-access-4csbv") pod "65012584-29ae-4c06-9cd0-e30a86d7ceca" (UID: "65012584-29ae-4c06-9cd0-e30a86d7ceca"). InnerVolumeSpecName "kube-api-access-4csbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.163074 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4dpp\" (UniqueName: \"kubernetes.io/projected/515b4b1b-6c0f-4e1c-aada-8a76c2791afe-kube-api-access-r4dpp\") pod \"nova-operator-index-xmglc\" (UID: \"515b4b1b-6c0f-4e1c-aada-8a76c2791afe\") " pod="openstack-operators/nova-operator-index-xmglc" Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.201385 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4csbv\" (UniqueName: \"kubernetes.io/projected/65012584-29ae-4c06-9cd0-e30a86d7ceca-kube-api-access-4csbv\") on node \"crc\" DevicePath \"\"" Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.301291 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-xmglc" Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.395757 5001 generic.go:334] "Generic (PLEG): container finished" podID="65012584-29ae-4c06-9cd0-e30a86d7ceca" containerID="df565de91cc7bfcfd678f7bcc3d58bb9767114f289a2a287d71df6cee3cb62eb" exitCode=0 Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.395834 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct" event={"ID":"65012584-29ae-4c06-9cd0-e30a86d7ceca","Type":"ContainerDied","Data":"df565de91cc7bfcfd678f7bcc3d58bb9767114f289a2a287d71df6cee3cb62eb"} Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.395870 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct" event={"ID":"65012584-29ae-4c06-9cd0-e30a86d7ceca","Type":"ContainerDied","Data":"531088ac753040b27b9fb8f5a507783c6b42370446fbd664450c22ff69ecd78e"} Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.395895 5001 scope.go:117] "RemoveContainer" containerID="df565de91cc7bfcfd678f7bcc3d58bb9767114f289a2a287d71df6cee3cb62eb" Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.396043 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct" Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.412075 5001 generic.go:334] "Generic (PLEG): container finished" podID="b17f77be-35c8-4b24-945a-7f9c10a4c78a" containerID="041a5c90a6f37f61bedae903f321c78393c762b6403cecc8ec1eecb64c21bf46" exitCode=0 Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.412165 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w" event={"ID":"b17f77be-35c8-4b24-945a-7f9c10a4c78a","Type":"ContainerDied","Data":"041a5c90a6f37f61bedae903f321c78393c762b6403cecc8ec1eecb64c21bf46"} Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.448074 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w" Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.463432 5001 scope.go:117] "RemoveContainer" containerID="df565de91cc7bfcfd678f7bcc3d58bb9767114f289a2a287d71df6cee3cb62eb" Jan 28 17:36:39 crc kubenswrapper[5001]: E0128 17:36:39.466543 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df565de91cc7bfcfd678f7bcc3d58bb9767114f289a2a287d71df6cee3cb62eb\": container with ID starting with df565de91cc7bfcfd678f7bcc3d58bb9767114f289a2a287d71df6cee3cb62eb not found: ID does not exist" containerID="df565de91cc7bfcfd678f7bcc3d58bb9767114f289a2a287d71df6cee3cb62eb" Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.466580 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df565de91cc7bfcfd678f7bcc3d58bb9767114f289a2a287d71df6cee3cb62eb"} err="failed to get container status \"df565de91cc7bfcfd678f7bcc3d58bb9767114f289a2a287d71df6cee3cb62eb\": rpc error: code = NotFound desc = could not find container \"df565de91cc7bfcfd678f7bcc3d58bb9767114f289a2a287d71df6cee3cb62eb\": container with ID starting with df565de91cc7bfcfd678f7bcc3d58bb9767114f289a2a287d71df6cee3cb62eb not found: ID does not exist" Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.467430 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct"] Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.489094 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55d49b7dd5-fp7ct"] Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.505035 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwg9f\" (UniqueName: \"kubernetes.io/projected/b17f77be-35c8-4b24-945a-7f9c10a4c78a-kube-api-access-kwg9f\") pod \"b17f77be-35c8-4b24-945a-7f9c10a4c78a\" (UID: \"b17f77be-35c8-4b24-945a-7f9c10a4c78a\") " Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.544812 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b17f77be-35c8-4b24-945a-7f9c10a4c78a-kube-api-access-kwg9f" (OuterVolumeSpecName: "kube-api-access-kwg9f") pod "b17f77be-35c8-4b24-945a-7f9c10a4c78a" (UID: "b17f77be-35c8-4b24-945a-7f9c10a4c78a"). InnerVolumeSpecName "kube-api-access-kwg9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.606299 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwg9f\" (UniqueName: \"kubernetes.io/projected/b17f77be-35c8-4b24-945a-7f9c10a4c78a-kube-api-access-kwg9f\") on node \"crc\" DevicePath \"\"" Jan 28 17:36:39 crc kubenswrapper[5001]: I0128 17:36:39.888448 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-xmglc"] Jan 28 17:36:40 crc kubenswrapper[5001]: I0128 17:36:40.588319 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-xmglc" event={"ID":"515b4b1b-6c0f-4e1c-aada-8a76c2791afe","Type":"ContainerStarted","Data":"f6851757b10908a26c6d4532d581c5d95bf9d90ea53919ba623b7e3e088f21a9"} Jan 28 17:36:40 crc kubenswrapper[5001]: I0128 17:36:40.588894 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-xmglc" event={"ID":"515b4b1b-6c0f-4e1c-aada-8a76c2791afe","Type":"ContainerStarted","Data":"fffa127d743a028891a9a81396fc0fc2a4c0cc45cb94bd384d8676521a1deadc"} Jan 28 17:36:40 crc kubenswrapper[5001]: I0128 17:36:40.597815 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w" Jan 28 17:36:40 crc kubenswrapper[5001]: I0128 17:36:40.604147 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65012584-29ae-4c06-9cd0-e30a86d7ceca" path="/var/lib/kubelet/pods/65012584-29ae-4c06-9cd0-e30a86d7ceca/volumes" Jan 28 17:36:40 crc kubenswrapper[5001]: I0128 17:36:40.604859 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w" event={"ID":"b17f77be-35c8-4b24-945a-7f9c10a4c78a","Type":"ContainerDied","Data":"a7b47adc575dedfe3eb58a5de866b18ffdf9d1cd79c9496dac82210e5fe64e61"} Jan 28 17:36:40 crc kubenswrapper[5001]: I0128 17:36:40.604905 5001 scope.go:117] "RemoveContainer" containerID="041a5c90a6f37f61bedae903f321c78393c762b6403cecc8ec1eecb64c21bf46" Jan 28 17:36:40 crc kubenswrapper[5001]: I0128 17:36:40.735202 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-index-xmglc" podStartSLOduration=2.441535281 podStartE2EDuration="2.735178604s" podCreationTimestamp="2026-01-28 17:36:38 +0000 UTC" firstStartedPulling="2026-01-28 17:36:39.888781018 +0000 UTC m=+1246.056569248" lastFinishedPulling="2026-01-28 17:36:40.182424341 +0000 UTC m=+1246.350212571" observedRunningTime="2026-01-28 17:36:40.731337943 +0000 UTC m=+1246.899126173" watchObservedRunningTime="2026-01-28 17:36:40.735178604 +0000 UTC m=+1246.902966834" Jan 28 17:36:40 crc kubenswrapper[5001]: I0128 17:36:40.760772 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w"] Jan 28 17:36:40 crc kubenswrapper[5001]: I0128 17:36:40.767675 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-init-679d48b6f-nfb9w"] Jan 28 17:36:41 crc kubenswrapper[5001]: I0128 17:36:41.613921 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-index-xmglc"] Jan 28 17:36:42 crc kubenswrapper[5001]: I0128 17:36:42.247515 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-index-lqskc"] Jan 28 17:36:42 crc kubenswrapper[5001]: 
E0128 17:36:42.247924 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17f77be-35c8-4b24-945a-7f9c10a4c78a" containerName="operator" Jan 28 17:36:42 crc kubenswrapper[5001]: I0128 17:36:42.247953 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17f77be-35c8-4b24-945a-7f9c10a4c78a" containerName="operator" Jan 28 17:36:42 crc kubenswrapper[5001]: E0128 17:36:42.247993 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65012584-29ae-4c06-9cd0-e30a86d7ceca" containerName="manager" Jan 28 17:36:42 crc kubenswrapper[5001]: I0128 17:36:42.248001 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="65012584-29ae-4c06-9cd0-e30a86d7ceca" containerName="manager" Jan 28 17:36:42 crc kubenswrapper[5001]: I0128 17:36:42.248185 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="65012584-29ae-4c06-9cd0-e30a86d7ceca" containerName="manager" Jan 28 17:36:42 crc kubenswrapper[5001]: I0128 17:36:42.248205 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="b17f77be-35c8-4b24-945a-7f9c10a4c78a" containerName="operator" Jan 28 17:36:42 crc kubenswrapper[5001]: I0128 17:36:42.248764 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-lqskc" Jan 28 17:36:42 crc kubenswrapper[5001]: I0128 17:36:42.259380 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-lqskc"] Jan 28 17:36:42 crc kubenswrapper[5001]: I0128 17:36:42.346315 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvvkq\" (UniqueName: \"kubernetes.io/projected/9780fbb4-beca-4c12-a4ca-b90e06fb59ee-kube-api-access-qvvkq\") pod \"nova-operator-index-lqskc\" (UID: \"9780fbb4-beca-4c12-a4ca-b90e06fb59ee\") " pod="openstack-operators/nova-operator-index-lqskc" Jan 28 17:36:42 crc kubenswrapper[5001]: I0128 17:36:42.448839 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvvkq\" (UniqueName: \"kubernetes.io/projected/9780fbb4-beca-4c12-a4ca-b90e06fb59ee-kube-api-access-qvvkq\") pod \"nova-operator-index-lqskc\" (UID: \"9780fbb4-beca-4c12-a4ca-b90e06fb59ee\") " pod="openstack-operators/nova-operator-index-lqskc" Jan 28 17:36:42 crc kubenswrapper[5001]: I0128 17:36:42.472473 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvvkq\" (UniqueName: \"kubernetes.io/projected/9780fbb4-beca-4c12-a4ca-b90e06fb59ee-kube-api-access-qvvkq\") pod \"nova-operator-index-lqskc\" (UID: \"9780fbb4-beca-4c12-a4ca-b90e06fb59ee\") " pod="openstack-operators/nova-operator-index-lqskc" Jan 28 17:36:42 crc kubenswrapper[5001]: I0128 17:36:42.571027 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-index-lqskc" Jan 28 17:36:42 crc kubenswrapper[5001]: I0128 17:36:42.633572 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b17f77be-35c8-4b24-945a-7f9c10a4c78a" path="/var/lib/kubelet/pods/b17f77be-35c8-4b24-945a-7f9c10a4c78a/volumes" Jan 28 17:36:42 crc kubenswrapper[5001]: I0128 17:36:42.638164 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/nova-operator-index-xmglc" podUID="515b4b1b-6c0f-4e1c-aada-8a76c2791afe" containerName="registry-server" containerID="cri-o://f6851757b10908a26c6d4532d581c5d95bf9d90ea53919ba623b7e3e088f21a9" gracePeriod=2 Jan 28 17:36:43 crc kubenswrapper[5001]: I0128 17:36:43.693732 5001 generic.go:334] "Generic (PLEG): container finished" podID="515b4b1b-6c0f-4e1c-aada-8a76c2791afe" containerID="f6851757b10908a26c6d4532d581c5d95bf9d90ea53919ba623b7e3e088f21a9" exitCode=0 Jan 28 17:36:43 crc kubenswrapper[5001]: I0128 17:36:43.695061 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-xmglc" event={"ID":"515b4b1b-6c0f-4e1c-aada-8a76c2791afe","Type":"ContainerDied","Data":"f6851757b10908a26c6d4532d581c5d95bf9d90ea53919ba623b7e3e088f21a9"} Jan 28 17:36:43 crc kubenswrapper[5001]: I0128 17:36:43.912306 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-lqskc"] Jan 28 17:36:43 crc kubenswrapper[5001]: I0128 17:36:43.976232 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-xmglc" Jan 28 17:36:44 crc kubenswrapper[5001]: I0128 17:36:44.073283 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4dpp\" (UniqueName: \"kubernetes.io/projected/515b4b1b-6c0f-4e1c-aada-8a76c2791afe-kube-api-access-r4dpp\") pod \"515b4b1b-6c0f-4e1c-aada-8a76c2791afe\" (UID: \"515b4b1b-6c0f-4e1c-aada-8a76c2791afe\") " Jan 28 17:36:44 crc kubenswrapper[5001]: I0128 17:36:44.082152 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/515b4b1b-6c0f-4e1c-aada-8a76c2791afe-kube-api-access-r4dpp" (OuterVolumeSpecName: "kube-api-access-r4dpp") pod "515b4b1b-6c0f-4e1c-aada-8a76c2791afe" (UID: "515b4b1b-6c0f-4e1c-aada-8a76c2791afe"). InnerVolumeSpecName "kube-api-access-r4dpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:36:44 crc kubenswrapper[5001]: I0128 17:36:44.176201 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4dpp\" (UniqueName: \"kubernetes.io/projected/515b4b1b-6c0f-4e1c-aada-8a76c2791afe-kube-api-access-r4dpp\") on node \"crc\" DevicePath \"\"" Jan 28 17:36:44 crc kubenswrapper[5001]: I0128 17:36:44.704238 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-lqskc" event={"ID":"9780fbb4-beca-4c12-a4ca-b90e06fb59ee","Type":"ContainerStarted","Data":"b12dd855bf1cd823134bca115dfcffb6cd359bfa049eaf321a218f4111d913b9"} Jan 28 17:36:44 crc kubenswrapper[5001]: I0128 17:36:44.704616 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-lqskc" event={"ID":"9780fbb4-beca-4c12-a4ca-b90e06fb59ee","Type":"ContainerStarted","Data":"8ab1a1b38f7641d3cc6390e6ec19bceb136cee3b4a4a8ba49c210559972d02f9"} Jan 28 17:36:44 crc kubenswrapper[5001]: I0128 17:36:44.706550 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-xmglc" event={"ID":"515b4b1b-6c0f-4e1c-aada-8a76c2791afe","Type":"ContainerDied","Data":"fffa127d743a028891a9a81396fc0fc2a4c0cc45cb94bd384d8676521a1deadc"} Jan 28 17:36:44 crc kubenswrapper[5001]: I0128 17:36:44.706595 5001 scope.go:117] "RemoveContainer" containerID="f6851757b10908a26c6d4532d581c5d95bf9d90ea53919ba623b7e3e088f21a9" Jan 28 17:36:44 crc kubenswrapper[5001]: I0128 17:36:44.706618 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-xmglc" Jan 28 17:36:44 crc kubenswrapper[5001]: I0128 17:36:44.729407 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-index-lqskc" podStartSLOduration=2.495883663 podStartE2EDuration="2.729381203s" podCreationTimestamp="2026-01-28 17:36:42 +0000 UTC" firstStartedPulling="2026-01-28 17:36:43.924603916 +0000 UTC m=+1250.092392146" lastFinishedPulling="2026-01-28 17:36:44.158101456 +0000 UTC m=+1250.325889686" observedRunningTime="2026-01-28 17:36:44.721213378 +0000 UTC m=+1250.889001628" watchObservedRunningTime="2026-01-28 17:36:44.729381203 +0000 UTC m=+1250.897169433" Jan 28 17:36:44 crc kubenswrapper[5001]: I0128 17:36:44.737833 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-index-xmglc"] Jan 28 17:36:44 crc kubenswrapper[5001]: I0128 17:36:44.745833 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/nova-operator-index-xmglc"] Jan 28 17:36:46 crc kubenswrapper[5001]: I0128 17:36:46.603229 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="515b4b1b-6c0f-4e1c-aada-8a76c2791afe" path="/var/lib/kubelet/pods/515b4b1b-6c0f-4e1c-aada-8a76c2791afe/volumes" Jan 28 17:36:52 crc kubenswrapper[5001]: I0128 17:36:52.571893 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/nova-operator-index-lqskc" Jan 28 17:36:52 crc kubenswrapper[5001]: I0128 17:36:52.572490 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-index-lqskc" Jan 28 17:36:52 crc kubenswrapper[5001]: I0128 17:36:52.604142 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/nova-operator-index-lqskc" Jan 28 17:36:52 crc kubenswrapper[5001]: I0128 17:36:52.796252 5001 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-index-lqskc" Jan 28 17:37:04 crc kubenswrapper[5001]: I0128 17:37:04.834377 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:37:04 crc kubenswrapper[5001]: I0128 17:37:04.835008 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:37:04 crc kubenswrapper[5001]: I0128 17:37:04.835070 5001 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:37:04 crc kubenswrapper[5001]: I0128 17:37:04.835743 5001 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ccc5cb4e707a79570ebb35140ac6a6c78fccff38dad2a94d8294b2b7a155b3e0"} pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:37:04 crc kubenswrapper[5001]: I0128 17:37:04.835814 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" containerID="cri-o://ccc5cb4e707a79570ebb35140ac6a6c78fccff38dad2a94d8294b2b7a155b3e0" gracePeriod=600 Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.282518 5001 generic.go:334] "Generic (PLEG): container finished" podID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerID="ccc5cb4e707a79570ebb35140ac6a6c78fccff38dad2a94d8294b2b7a155b3e0" exitCode=0 Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.282597 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerDied","Data":"ccc5cb4e707a79570ebb35140ac6a6c78fccff38dad2a94d8294b2b7a155b3e0"} Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.283011 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerStarted","Data":"9c2ea3f31e70bc76378f33610638ba8a4614d235f2874eb9a110ed1d5f56e411"} Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.283034 5001 scope.go:117] "RemoveContainer" containerID="64823f4f0cdb758673ce03dbcf6563ab253f05451ac978f41bb7afc046d56ae8" Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.334262 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w"] Jan 28 17:37:06 crc kubenswrapper[5001]: E0128 17:37:06.334638 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515b4b1b-6c0f-4e1c-aada-8a76c2791afe" containerName="registry-server" Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.334657 5001 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="515b4b1b-6c0f-4e1c-aada-8a76c2791afe" containerName="registry-server" Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.334842 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="515b4b1b-6c0f-4e1c-aada-8a76c2791afe" containerName="registry-server" Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.335931 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w" Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.339221 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hrxwm" Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.352912 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w"] Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.458428 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/768cda87-43fb-49e7-a591-d7f0216e2683-bundle\") pod \"b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w\" (UID: \"768cda87-43fb-49e7-a591-d7f0216e2683\") " pod="openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w" Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.458492 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdprz\" (UniqueName: \"kubernetes.io/projected/768cda87-43fb-49e7-a591-d7f0216e2683-kube-api-access-gdprz\") pod \"b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w\" (UID: \"768cda87-43fb-49e7-a591-d7f0216e2683\") " pod="openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w" Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.458590 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/768cda87-43fb-49e7-a591-d7f0216e2683-util\") pod \"b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w\" (UID: \"768cda87-43fb-49e7-a591-d7f0216e2683\") " pod="openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w" Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.560095 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/768cda87-43fb-49e7-a591-d7f0216e2683-bundle\") pod \"b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w\" (UID: \"768cda87-43fb-49e7-a591-d7f0216e2683\") " pod="openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w" Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.560162 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdprz\" (UniqueName: \"kubernetes.io/projected/768cda87-43fb-49e7-a591-d7f0216e2683-kube-api-access-gdprz\") pod \"b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w\" (UID: \"768cda87-43fb-49e7-a591-d7f0216e2683\") " pod="openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w" Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.560246 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/768cda87-43fb-49e7-a591-d7f0216e2683-util\") pod 
\"b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w\" (UID: \"768cda87-43fb-49e7-a591-d7f0216e2683\") " pod="openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w" Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.561230 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/768cda87-43fb-49e7-a591-d7f0216e2683-util\") pod \"b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w\" (UID: \"768cda87-43fb-49e7-a591-d7f0216e2683\") " pod="openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w" Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.561321 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/768cda87-43fb-49e7-a591-d7f0216e2683-bundle\") pod \"b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w\" (UID: \"768cda87-43fb-49e7-a591-d7f0216e2683\") " pod="openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w" Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.593289 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdprz\" (UniqueName: \"kubernetes.io/projected/768cda87-43fb-49e7-a591-d7f0216e2683-kube-api-access-gdprz\") pod \"b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w\" (UID: \"768cda87-43fb-49e7-a591-d7f0216e2683\") " pod="openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w" Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.668170 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w" Jan 28 17:37:06 crc kubenswrapper[5001]: I0128 17:37:06.880743 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w"] Jan 28 17:37:06 crc kubenswrapper[5001]: W0128 17:37:06.880949 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod768cda87_43fb_49e7_a591_d7f0216e2683.slice/crio-dc9eded3cf30e6f78fbd703ff3a1a33a8632cf06c8423d0ea0eca42e8a4aecb0 WatchSource:0}: Error finding container dc9eded3cf30e6f78fbd703ff3a1a33a8632cf06c8423d0ea0eca42e8a4aecb0: Status 404 returned error can't find the container with id dc9eded3cf30e6f78fbd703ff3a1a33a8632cf06c8423d0ea0eca42e8a4aecb0 Jan 28 17:37:07 crc kubenswrapper[5001]: I0128 17:37:07.290460 5001 generic.go:334] "Generic (PLEG): container finished" podID="768cda87-43fb-49e7-a591-d7f0216e2683" containerID="92fae99069602badb5afba22922632dc95a7a106b48552b04008d144d9b372e6" exitCode=0 Jan 28 17:37:07 crc kubenswrapper[5001]: I0128 17:37:07.290515 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w" event={"ID":"768cda87-43fb-49e7-a591-d7f0216e2683","Type":"ContainerDied","Data":"92fae99069602badb5afba22922632dc95a7a106b48552b04008d144d9b372e6"} Jan 28 17:37:07 crc kubenswrapper[5001]: I0128 17:37:07.290882 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w" event={"ID":"768cda87-43fb-49e7-a591-d7f0216e2683","Type":"ContainerStarted","Data":"dc9eded3cf30e6f78fbd703ff3a1a33a8632cf06c8423d0ea0eca42e8a4aecb0"} Jan 28 17:37:08 crc 
kubenswrapper[5001]: I0128 17:37:08.305123 5001 generic.go:334] "Generic (PLEG): container finished" podID="768cda87-43fb-49e7-a591-d7f0216e2683" containerID="5a0223f8c051c9f77cc15c81f430045f26ef4dd8b5f638a9866f089d6b431cc1" exitCode=0 Jan 28 17:37:08 crc kubenswrapper[5001]: I0128 17:37:08.305236 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w" event={"ID":"768cda87-43fb-49e7-a591-d7f0216e2683","Type":"ContainerDied","Data":"5a0223f8c051c9f77cc15c81f430045f26ef4dd8b5f638a9866f089d6b431cc1"} Jan 28 17:37:09 crc kubenswrapper[5001]: I0128 17:37:09.315144 5001 generic.go:334] "Generic (PLEG): container finished" podID="768cda87-43fb-49e7-a591-d7f0216e2683" containerID="acd8bb8e2ce61cd8ff3fc559d4afa7ff1b0e7eff2b0af05208a820f2aa6d5e4c" exitCode=0 Jan 28 17:37:09 crc kubenswrapper[5001]: I0128 17:37:09.315166 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w" event={"ID":"768cda87-43fb-49e7-a591-d7f0216e2683","Type":"ContainerDied","Data":"acd8bb8e2ce61cd8ff3fc559d4afa7ff1b0e7eff2b0af05208a820f2aa6d5e4c"} Jan 28 17:37:10 crc kubenswrapper[5001]: I0128 17:37:10.627718 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w" Jan 28 17:37:10 crc kubenswrapper[5001]: I0128 17:37:10.666883 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/768cda87-43fb-49e7-a591-d7f0216e2683-util\") pod \"768cda87-43fb-49e7-a591-d7f0216e2683\" (UID: \"768cda87-43fb-49e7-a591-d7f0216e2683\") " Jan 28 17:37:10 crc kubenswrapper[5001]: I0128 17:37:10.667394 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdprz\" (UniqueName: \"kubernetes.io/projected/768cda87-43fb-49e7-a591-d7f0216e2683-kube-api-access-gdprz\") pod \"768cda87-43fb-49e7-a591-d7f0216e2683\" (UID: \"768cda87-43fb-49e7-a591-d7f0216e2683\") " Jan 28 17:37:10 crc kubenswrapper[5001]: I0128 17:37:10.667478 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/768cda87-43fb-49e7-a591-d7f0216e2683-bundle\") pod \"768cda87-43fb-49e7-a591-d7f0216e2683\" (UID: \"768cda87-43fb-49e7-a591-d7f0216e2683\") " Jan 28 17:37:10 crc kubenswrapper[5001]: I0128 17:37:10.669136 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/768cda87-43fb-49e7-a591-d7f0216e2683-bundle" (OuterVolumeSpecName: "bundle") pod "768cda87-43fb-49e7-a591-d7f0216e2683" (UID: "768cda87-43fb-49e7-a591-d7f0216e2683"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:37:10 crc kubenswrapper[5001]: I0128 17:37:10.674219 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/768cda87-43fb-49e7-a591-d7f0216e2683-kube-api-access-gdprz" (OuterVolumeSpecName: "kube-api-access-gdprz") pod "768cda87-43fb-49e7-a591-d7f0216e2683" (UID: "768cda87-43fb-49e7-a591-d7f0216e2683"). InnerVolumeSpecName "kube-api-access-gdprz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:37:10 crc kubenswrapper[5001]: I0128 17:37:10.681618 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/768cda87-43fb-49e7-a591-d7f0216e2683-util" (OuterVolumeSpecName: "util") pod "768cda87-43fb-49e7-a591-d7f0216e2683" (UID: "768cda87-43fb-49e7-a591-d7f0216e2683"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:37:10 crc kubenswrapper[5001]: I0128 17:37:10.770495 5001 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/768cda87-43fb-49e7-a591-d7f0216e2683-util\") on node \"crc\" DevicePath \"\"" Jan 28 17:37:10 crc kubenswrapper[5001]: I0128 17:37:10.770754 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdprz\" (UniqueName: \"kubernetes.io/projected/768cda87-43fb-49e7-a591-d7f0216e2683-kube-api-access-gdprz\") on node \"crc\" DevicePath \"\"" Jan 28 17:37:10 crc kubenswrapper[5001]: I0128 17:37:10.770835 5001 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/768cda87-43fb-49e7-a591-d7f0216e2683-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:37:11 crc kubenswrapper[5001]: I0128 17:37:11.332264 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w" event={"ID":"768cda87-43fb-49e7-a591-d7f0216e2683","Type":"ContainerDied","Data":"dc9eded3cf30e6f78fbd703ff3a1a33a8632cf06c8423d0ea0eca42e8a4aecb0"} Jan 28 17:37:11 crc kubenswrapper[5001]: I0128 17:37:11.332660 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc9eded3cf30e6f78fbd703ff3a1a33a8632cf06c8423d0ea0eca42e8a4aecb0" Jan 28 17:37:11 crc kubenswrapper[5001]: I0128 17:37:11.332331 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w" Jan 28 17:37:15 crc kubenswrapper[5001]: I0128 17:37:15.817134 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5988f4bb96-bmlnr"] Jan 28 17:37:15 crc kubenswrapper[5001]: E0128 17:37:15.817818 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="768cda87-43fb-49e7-a591-d7f0216e2683" containerName="util" Jan 28 17:37:15 crc kubenswrapper[5001]: I0128 17:37:15.817834 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="768cda87-43fb-49e7-a591-d7f0216e2683" containerName="util" Jan 28 17:37:15 crc kubenswrapper[5001]: E0128 17:37:15.817869 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="768cda87-43fb-49e7-a591-d7f0216e2683" containerName="pull" Jan 28 17:37:15 crc kubenswrapper[5001]: I0128 17:37:15.817877 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="768cda87-43fb-49e7-a591-d7f0216e2683" containerName="pull" Jan 28 17:37:15 crc kubenswrapper[5001]: E0128 17:37:15.817903 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="768cda87-43fb-49e7-a591-d7f0216e2683" containerName="extract" Jan 28 17:37:15 crc kubenswrapper[5001]: I0128 17:37:15.817910 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="768cda87-43fb-49e7-a591-d7f0216e2683" containerName="extract" Jan 28 17:37:15 crc kubenswrapper[5001]: I0128 17:37:15.818111 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="768cda87-43fb-49e7-a591-d7f0216e2683" containerName="extract" Jan 28 17:37:15 crc kubenswrapper[5001]: I0128 17:37:15.818818 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5988f4bb96-bmlnr" Jan 28 17:37:15 crc kubenswrapper[5001]: I0128 17:37:15.821820 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-service-cert" Jan 28 17:37:15 crc kubenswrapper[5001]: I0128 17:37:15.821908 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-k8xdz" Jan 28 17:37:15 crc kubenswrapper[5001]: I0128 17:37:15.837177 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5988f4bb96-bmlnr"] Jan 28 17:37:15 crc kubenswrapper[5001]: I0128 17:37:15.952916 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8nvg\" (UniqueName: \"kubernetes.io/projected/17fbd35a-c51b-4e33-b257-e8d10c67054c-kube-api-access-w8nvg\") pod \"nova-operator-controller-manager-5988f4bb96-bmlnr\" (UID: \"17fbd35a-c51b-4e33-b257-e8d10c67054c\") " pod="openstack-operators/nova-operator-controller-manager-5988f4bb96-bmlnr" Jan 28 17:37:15 crc kubenswrapper[5001]: I0128 17:37:15.953171 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17fbd35a-c51b-4e33-b257-e8d10c67054c-webhook-cert\") pod \"nova-operator-controller-manager-5988f4bb96-bmlnr\" (UID: \"17fbd35a-c51b-4e33-b257-e8d10c67054c\") " pod="openstack-operators/nova-operator-controller-manager-5988f4bb96-bmlnr" Jan 28 17:37:15 crc kubenswrapper[5001]: I0128 17:37:15.953290 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17fbd35a-c51b-4e33-b257-e8d10c67054c-apiservice-cert\") pod \"nova-operator-controller-manager-5988f4bb96-bmlnr\" (UID: \"17fbd35a-c51b-4e33-b257-e8d10c67054c\") " pod="openstack-operators/nova-operator-controller-manager-5988f4bb96-bmlnr" Jan 28 17:37:16 crc kubenswrapper[5001]: I0128 17:37:16.054272 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17fbd35a-c51b-4e33-b257-e8d10c67054c-webhook-cert\") pod \"nova-operator-controller-manager-5988f4bb96-bmlnr\" (UID: \"17fbd35a-c51b-4e33-b257-e8d10c67054c\") " pod="openstack-operators/nova-operator-controller-manager-5988f4bb96-bmlnr" Jan 28 17:37:16 crc kubenswrapper[5001]: I0128 17:37:16.054332 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17fbd35a-c51b-4e33-b257-e8d10c67054c-apiservice-cert\") pod \"nova-operator-controller-manager-5988f4bb96-bmlnr\" (UID: \"17fbd35a-c51b-4e33-b257-e8d10c67054c\") " pod="openstack-operators/nova-operator-controller-manager-5988f4bb96-bmlnr" Jan 28 17:37:16 crc kubenswrapper[5001]: I0128 17:37:16.054371 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8nvg\" (UniqueName: \"kubernetes.io/projected/17fbd35a-c51b-4e33-b257-e8d10c67054c-kube-api-access-w8nvg\") pod \"nova-operator-controller-manager-5988f4bb96-bmlnr\" (UID: \"17fbd35a-c51b-4e33-b257-e8d10c67054c\") " pod="openstack-operators/nova-operator-controller-manager-5988f4bb96-bmlnr" Jan 28 17:37:16 crc kubenswrapper[5001]: I0128 17:37:16.065032 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17fbd35a-c51b-4e33-b257-e8d10c67054c-webhook-cert\") pod \"nova-operator-controller-manager-5988f4bb96-bmlnr\" (UID: \"17fbd35a-c51b-4e33-b257-e8d10c67054c\") " pod="openstack-operators/nova-operator-controller-manager-5988f4bb96-bmlnr" Jan 28 17:37:16 crc kubenswrapper[5001]: I0128 17:37:16.065213 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17fbd35a-c51b-4e33-b257-e8d10c67054c-apiservice-cert\") pod \"nova-operator-controller-manager-5988f4bb96-bmlnr\" (UID: \"17fbd35a-c51b-4e33-b257-e8d10c67054c\") " pod="openstack-operators/nova-operator-controller-manager-5988f4bb96-bmlnr" Jan 28 17:37:16 crc kubenswrapper[5001]: I0128 17:37:16.072804 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8nvg\" (UniqueName: \"kubernetes.io/projected/17fbd35a-c51b-4e33-b257-e8d10c67054c-kube-api-access-w8nvg\") pod \"nova-operator-controller-manager-5988f4bb96-bmlnr\" (UID: \"17fbd35a-c51b-4e33-b257-e8d10c67054c\") " pod="openstack-operators/nova-operator-controller-manager-5988f4bb96-bmlnr" Jan 28 17:37:16 crc kubenswrapper[5001]: I0128 17:37:16.139033 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5988f4bb96-bmlnr" Jan 28 17:37:16 crc kubenswrapper[5001]: I0128 17:37:16.635487 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5988f4bb96-bmlnr"] Jan 28 17:37:17 crc kubenswrapper[5001]: I0128 17:37:17.377051 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5988f4bb96-bmlnr" event={"ID":"17fbd35a-c51b-4e33-b257-e8d10c67054c","Type":"ContainerStarted","Data":"0150e8aa09ad36278c77fab5ea39bc5f5437b68736c0460f659df104072fac76"} Jan 28 17:37:17 crc kubenswrapper[5001]: I0128 17:37:17.377316 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5988f4bb96-bmlnr" event={"ID":"17fbd35a-c51b-4e33-b257-e8d10c67054c","Type":"ContainerStarted","Data":"d30479ee8a54576ffa444516379f75dda5a12fc0326d857a3b2d917e29a14888"} Jan 28 17:37:17 crc kubenswrapper[5001]: I0128 17:37:17.377454 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5988f4bb96-bmlnr" Jan 28 17:37:17 crc kubenswrapper[5001]: I0128 17:37:17.394309 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5988f4bb96-bmlnr" podStartSLOduration=2.394293601 podStartE2EDuration="2.394293601s" podCreationTimestamp="2026-01-28 17:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:37:17.392839839 +0000 UTC m=+1283.560628069" watchObservedRunningTime="2026-01-28 17:37:17.394293601 +0000 UTC m=+1283.562081831" Jan 28 17:37:26 crc kubenswrapper[5001]: I0128 17:37:26.144409 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5988f4bb96-bmlnr" Jan 28 17:37:52 crc kubenswrapper[5001]: I0128 17:37:52.842428 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-chbgv"] Jan 28 17:37:52 crc kubenswrapper[5001]: I0128 17:37:52.844082 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-chbgv" Jan 28 17:37:52 crc kubenswrapper[5001]: I0128 17:37:52.853346 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-chbgv"] Jan 28 17:37:52 crc kubenswrapper[5001]: I0128 17:37:52.940295 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-wgwgx"] Jan 28 17:37:52 crc kubenswrapper[5001]: I0128 17:37:52.941406 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-wgwgx" Jan 28 17:37:52 crc kubenswrapper[5001]: I0128 17:37:52.948750 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-wgwgx"] Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.007548 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea91c114-af0b-41fe-a820-09a2dc5c555d-operator-scripts\") pod \"nova-api-db-create-chbgv\" (UID: \"ea91c114-af0b-41fe-a820-09a2dc5c555d\") " pod="nova-kuttl-default/nova-api-db-create-chbgv" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.007643 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw5hf\" (UniqueName: \"kubernetes.io/projected/ea91c114-af0b-41fe-a820-09a2dc5c555d-kube-api-access-kw5hf\") pod \"nova-api-db-create-chbgv\" (UID: \"ea91c114-af0b-41fe-a820-09a2dc5c555d\") " pod="nova-kuttl-default/nova-api-db-create-chbgv" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.058610 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-72lfm"] Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.059943 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-72lfm" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.064728 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-2588-account-create-update-ngjsm"] Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.066670 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-2588-account-create-update-ngjsm" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.072315 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-72lfm"] Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.073185 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.079641 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-2588-account-create-update-ngjsm"] Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.108924 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea91c114-af0b-41fe-a820-09a2dc5c555d-operator-scripts\") pod \"nova-api-db-create-chbgv\" (UID: \"ea91c114-af0b-41fe-a820-09a2dc5c555d\") " pod="nova-kuttl-default/nova-api-db-create-chbgv" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.109105 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw5hf\" (UniqueName: \"kubernetes.io/projected/ea91c114-af0b-41fe-a820-09a2dc5c555d-kube-api-access-kw5hf\") pod \"nova-api-db-create-chbgv\" (UID: \"ea91c114-af0b-41fe-a820-09a2dc5c555d\") " pod="nova-kuttl-default/nova-api-db-create-chbgv" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.109156 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7q6t\" (UniqueName: \"kubernetes.io/projected/6e06436b-4009-4d8c-81ec-680b1fc02b76-kube-api-access-z7q6t\") pod \"nova-cell0-db-create-wgwgx\" (UID: \"6e06436b-4009-4d8c-81ec-680b1fc02b76\") " 
pod="nova-kuttl-default/nova-cell0-db-create-wgwgx" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.109176 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e06436b-4009-4d8c-81ec-680b1fc02b76-operator-scripts\") pod \"nova-cell0-db-create-wgwgx\" (UID: \"6e06436b-4009-4d8c-81ec-680b1fc02b76\") " pod="nova-kuttl-default/nova-cell0-db-create-wgwgx" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.109683 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea91c114-af0b-41fe-a820-09a2dc5c555d-operator-scripts\") pod \"nova-api-db-create-chbgv\" (UID: \"ea91c114-af0b-41fe-a820-09a2dc5c555d\") " pod="nova-kuttl-default/nova-api-db-create-chbgv" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.135300 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw5hf\" (UniqueName: \"kubernetes.io/projected/ea91c114-af0b-41fe-a820-09a2dc5c555d-kube-api-access-kw5hf\") pod \"nova-api-db-create-chbgv\" (UID: \"ea91c114-af0b-41fe-a820-09a2dc5c555d\") " pod="nova-kuttl-default/nova-api-db-create-chbgv" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.162046 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-chbgv" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.210353 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6bwt\" (UniqueName: \"kubernetes.io/projected/c39d7310-6252-4b44-82e3-0a239050e52d-kube-api-access-r6bwt\") pod \"nova-cell1-db-create-72lfm\" (UID: \"c39d7310-6252-4b44-82e3-0a239050e52d\") " pod="nova-kuttl-default/nova-cell1-db-create-72lfm" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.210450 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7q6t\" (UniqueName: \"kubernetes.io/projected/6e06436b-4009-4d8c-81ec-680b1fc02b76-kube-api-access-z7q6t\") pod \"nova-cell0-db-create-wgwgx\" (UID: \"6e06436b-4009-4d8c-81ec-680b1fc02b76\") " pod="nova-kuttl-default/nova-cell0-db-create-wgwgx" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.210482 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e06436b-4009-4d8c-81ec-680b1fc02b76-operator-scripts\") pod \"nova-cell0-db-create-wgwgx\" (UID: \"6e06436b-4009-4d8c-81ec-680b1fc02b76\") " pod="nova-kuttl-default/nova-cell0-db-create-wgwgx" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.210545 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99a2bd48-0920-45b1-bb67-816da79f3160-operator-scripts\") pod \"nova-api-2588-account-create-update-ngjsm\" (UID: \"99a2bd48-0920-45b1-bb67-816da79f3160\") " pod="nova-kuttl-default/nova-api-2588-account-create-update-ngjsm" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.210608 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjmpr\" (UniqueName: \"kubernetes.io/projected/99a2bd48-0920-45b1-bb67-816da79f3160-kube-api-access-wjmpr\") pod \"nova-api-2588-account-create-update-ngjsm\" (UID: \"99a2bd48-0920-45b1-bb67-816da79f3160\") " 
pod="nova-kuttl-default/nova-api-2588-account-create-update-ngjsm" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.210638 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c39d7310-6252-4b44-82e3-0a239050e52d-operator-scripts\") pod \"nova-cell1-db-create-72lfm\" (UID: \"c39d7310-6252-4b44-82e3-0a239050e52d\") " pod="nova-kuttl-default/nova-cell1-db-create-72lfm" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.211653 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e06436b-4009-4d8c-81ec-680b1fc02b76-operator-scripts\") pod \"nova-cell0-db-create-wgwgx\" (UID: \"6e06436b-4009-4d8c-81ec-680b1fc02b76\") " pod="nova-kuttl-default/nova-cell0-db-create-wgwgx" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.233542 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7q6t\" (UniqueName: \"kubernetes.io/projected/6e06436b-4009-4d8c-81ec-680b1fc02b76-kube-api-access-z7q6t\") pod \"nova-cell0-db-create-wgwgx\" (UID: \"6e06436b-4009-4d8c-81ec-680b1fc02b76\") " pod="nova-kuttl-default/nova-cell0-db-create-wgwgx" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.247953 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb"] Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.248926 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.250473 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.265159 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb"] Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.266405 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-wgwgx" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.313836 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6bwt\" (UniqueName: \"kubernetes.io/projected/c39d7310-6252-4b44-82e3-0a239050e52d-kube-api-access-r6bwt\") pod \"nova-cell1-db-create-72lfm\" (UID: \"c39d7310-6252-4b44-82e3-0a239050e52d\") " pod="nova-kuttl-default/nova-cell1-db-create-72lfm" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.314211 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99a2bd48-0920-45b1-bb67-816da79f3160-operator-scripts\") pod \"nova-api-2588-account-create-update-ngjsm\" (UID: \"99a2bd48-0920-45b1-bb67-816da79f3160\") " pod="nova-kuttl-default/nova-api-2588-account-create-update-ngjsm" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.314261 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjmpr\" (UniqueName: \"kubernetes.io/projected/99a2bd48-0920-45b1-bb67-816da79f3160-kube-api-access-wjmpr\") pod \"nova-api-2588-account-create-update-ngjsm\" (UID: \"99a2bd48-0920-45b1-bb67-816da79f3160\") " pod="nova-kuttl-default/nova-api-2588-account-create-update-ngjsm" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.314283 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c39d7310-6252-4b44-82e3-0a239050e52d-operator-scripts\") pod \"nova-cell1-db-create-72lfm\" (UID: \"c39d7310-6252-4b44-82e3-0a239050e52d\") " pod="nova-kuttl-default/nova-cell1-db-create-72lfm" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.314954 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c39d7310-6252-4b44-82e3-0a239050e52d-operator-scripts\") pod \"nova-cell1-db-create-72lfm\" (UID: \"c39d7310-6252-4b44-82e3-0a239050e52d\") " pod="nova-kuttl-default/nova-cell1-db-create-72lfm" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.318713 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99a2bd48-0920-45b1-bb67-816da79f3160-operator-scripts\") pod \"nova-api-2588-account-create-update-ngjsm\" (UID: \"99a2bd48-0920-45b1-bb67-816da79f3160\") " pod="nova-kuttl-default/nova-api-2588-account-create-update-ngjsm" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.341424 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6bwt\" (UniqueName: \"kubernetes.io/projected/c39d7310-6252-4b44-82e3-0a239050e52d-kube-api-access-r6bwt\") pod \"nova-cell1-db-create-72lfm\" (UID: \"c39d7310-6252-4b44-82e3-0a239050e52d\") " pod="nova-kuttl-default/nova-cell1-db-create-72lfm" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.341457 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjmpr\" (UniqueName: \"kubernetes.io/projected/99a2bd48-0920-45b1-bb67-816da79f3160-kube-api-access-wjmpr\") pod \"nova-api-2588-account-create-update-ngjsm\" (UID: \"99a2bd48-0920-45b1-bb67-816da79f3160\") " pod="nova-kuttl-default/nova-api-2588-account-create-update-ngjsm" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.390476 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-72lfm" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.403173 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-2588-account-create-update-ngjsm" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.416746 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4rth\" (UniqueName: \"kubernetes.io/projected/5487ccf4-adee-4c40-bef7-75373ee69307-kube-api-access-g4rth\") pod \"nova-cell0-91b9-account-create-update-9fmrb\" (UID: \"5487ccf4-adee-4c40-bef7-75373ee69307\") " pod="nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.416900 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5487ccf4-adee-4c40-bef7-75373ee69307-operator-scripts\") pod \"nova-cell0-91b9-account-create-update-9fmrb\" (UID: \"5487ccf4-adee-4c40-bef7-75373ee69307\") " pod="nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.459613 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl"] Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.461472 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.463716 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.470517 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl"] Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.518646 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5487ccf4-adee-4c40-bef7-75373ee69307-operator-scripts\") pod \"nova-cell0-91b9-account-create-update-9fmrb\" (UID: \"5487ccf4-adee-4c40-bef7-75373ee69307\") " pod="nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.518726 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4rth\" (UniqueName: \"kubernetes.io/projected/5487ccf4-adee-4c40-bef7-75373ee69307-kube-api-access-g4rth\") pod \"nova-cell0-91b9-account-create-update-9fmrb\" (UID: \"5487ccf4-adee-4c40-bef7-75373ee69307\") " pod="nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.519720 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5487ccf4-adee-4c40-bef7-75373ee69307-operator-scripts\") pod \"nova-cell0-91b9-account-create-update-9fmrb\" (UID: \"5487ccf4-adee-4c40-bef7-75373ee69307\") " pod="nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.542864 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4rth\" (UniqueName: \"kubernetes.io/projected/5487ccf4-adee-4c40-bef7-75373ee69307-kube-api-access-g4rth\") pod 
\"nova-cell0-91b9-account-create-update-9fmrb\" (UID: \"5487ccf4-adee-4c40-bef7-75373ee69307\") " pod="nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.620427 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f3732ce-50e1-4e2e-b082-f8b9b984226b-operator-scripts\") pod \"nova-cell1-455c-account-create-update-8fpxl\" (UID: \"9f3732ce-50e1-4e2e-b082-f8b9b984226b\") " pod="nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.620483 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrwvn\" (UniqueName: \"kubernetes.io/projected/9f3732ce-50e1-4e2e-b082-f8b9b984226b-kube-api-access-lrwvn\") pod \"nova-cell1-455c-account-create-update-8fpxl\" (UID: \"9f3732ce-50e1-4e2e-b082-f8b9b984226b\") " pod="nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.648886 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.661054 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-chbgv"] Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.722920 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f3732ce-50e1-4e2e-b082-f8b9b984226b-operator-scripts\") pod \"nova-cell1-455c-account-create-update-8fpxl\" (UID: \"9f3732ce-50e1-4e2e-b082-f8b9b984226b\") " pod="nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.724396 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrwvn\" (UniqueName: \"kubernetes.io/projected/9f3732ce-50e1-4e2e-b082-f8b9b984226b-kube-api-access-lrwvn\") pod \"nova-cell1-455c-account-create-update-8fpxl\" (UID: \"9f3732ce-50e1-4e2e-b082-f8b9b984226b\") " pod="nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.724282 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f3732ce-50e1-4e2e-b082-f8b9b984226b-operator-scripts\") pod \"nova-cell1-455c-account-create-update-8fpxl\" (UID: \"9f3732ce-50e1-4e2e-b082-f8b9b984226b\") " pod="nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.743745 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrwvn\" (UniqueName: \"kubernetes.io/projected/9f3732ce-50e1-4e2e-b082-f8b9b984226b-kube-api-access-lrwvn\") pod \"nova-cell1-455c-account-create-update-8fpxl\" (UID: \"9f3732ce-50e1-4e2e-b082-f8b9b984226b\") " pod="nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.786647 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl" Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.814185 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-wgwgx"] Jan 28 17:37:53 crc kubenswrapper[5001]: W0128 17:37:53.822347 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e06436b_4009_4d8c_81ec_680b1fc02b76.slice/crio-e2ea3541c78a9c970762868c4fff7808c2344a4c7cbf7b243de7fecc61b3ad8e WatchSource:0}: Error finding container e2ea3541c78a9c970762868c4fff7808c2344a4c7cbf7b243de7fecc61b3ad8e: Status 404 returned error can't find the container with id e2ea3541c78a9c970762868c4fff7808c2344a4c7cbf7b243de7fecc61b3ad8e Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.960227 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-72lfm"] Jan 28 17:37:53 crc kubenswrapper[5001]: W0128 17:37:53.969633 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc39d7310_6252_4b44_82e3_0a239050e52d.slice/crio-f768fa016cc495810c71410edcf8b1cbc52c77a356bb042e0754f22c44ce9470 WatchSource:0}: Error finding container f768fa016cc495810c71410edcf8b1cbc52c77a356bb042e0754f22c44ce9470: Status 404 returned error can't find the container with id f768fa016cc495810c71410edcf8b1cbc52c77a356bb042e0754f22c44ce9470 Jan 28 17:37:53 crc kubenswrapper[5001]: I0128 17:37:53.986675 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-2588-account-create-update-ngjsm"] Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.211516 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb"] Jan 28 17:37:54 crc kubenswrapper[5001]: W0128 17:37:54.215176 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5487ccf4_adee_4c40_bef7_75373ee69307.slice/crio-6441090eef89194c38a3f8f5cb7170e553b5b1fd42098783064948a56858e065 WatchSource:0}: Error finding container 6441090eef89194c38a3f8f5cb7170e553b5b1fd42098783064948a56858e065: Status 404 returned error can't find the container with id 6441090eef89194c38a3f8f5cb7170e553b5b1fd42098783064948a56858e065 Jan 28 17:37:54 crc kubenswrapper[5001]: W0128 17:37:54.318900 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f3732ce_50e1_4e2e_b082_f8b9b984226b.slice/crio-d207d1baf477dcb95f2eb0ef55bb48655cee89fc020b259cf924c8cc32009059 WatchSource:0}: Error finding container d207d1baf477dcb95f2eb0ef55bb48655cee89fc020b259cf924c8cc32009059: Status 404 returned error can't find the container with id d207d1baf477dcb95f2eb0ef55bb48655cee89fc020b259cf924c8cc32009059 Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.325399 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl"] Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.695965 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-72lfm" event={"ID":"c39d7310-6252-4b44-82e3-0a239050e52d","Type":"ContainerStarted","Data":"596d436cdb262f5813b078f76c001d300849eae806fff2d5b3f61ddf6f5316e8"} Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.696341 5001 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-72lfm" event={"ID":"c39d7310-6252-4b44-82e3-0a239050e52d","Type":"ContainerStarted","Data":"f768fa016cc495810c71410edcf8b1cbc52c77a356bb042e0754f22c44ce9470"} Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.698218 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl" event={"ID":"9f3732ce-50e1-4e2e-b082-f8b9b984226b","Type":"ContainerStarted","Data":"22bd3cdc21bf1f59ca3c11bab6677ce09a684e199f135aa1585e052450c70b29"} Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.698252 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl" event={"ID":"9f3732ce-50e1-4e2e-b082-f8b9b984226b","Type":"ContainerStarted","Data":"d207d1baf477dcb95f2eb0ef55bb48655cee89fc020b259cf924c8cc32009059"} Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.717841 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-chbgv" event={"ID":"ea91c114-af0b-41fe-a820-09a2dc5c555d","Type":"ContainerStarted","Data":"20342c4048c675e00fde67c78a0369fb621ed4f3ad71cb16eaab444c29cad6df"} Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.718129 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-chbgv" event={"ID":"ea91c114-af0b-41fe-a820-09a2dc5c555d","Type":"ContainerStarted","Data":"3f40acf6d4fc6165fe7cb44aa0919f3773fec2e57df3428b9362c76a46cf9d77"} Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.719234 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-wgwgx" event={"ID":"6e06436b-4009-4d8c-81ec-680b1fc02b76","Type":"ContainerStarted","Data":"41a458233c0c1529c5bf1853bc12336b1e4c8cf26d3cb3227daf91edd4d9d097"} Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.719335 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-wgwgx" event={"ID":"6e06436b-4009-4d8c-81ec-680b1fc02b76","Type":"ContainerStarted","Data":"e2ea3541c78a9c970762868c4fff7808c2344a4c7cbf7b243de7fecc61b3ad8e"} Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.721049 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb" event={"ID":"5487ccf4-adee-4c40-bef7-75373ee69307","Type":"ContainerStarted","Data":"9b8ec41a75ace235e1d0438625bd9867434b2ccdc062686c9ddd22df15f89830"} Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.721096 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb" event={"ID":"5487ccf4-adee-4c40-bef7-75373ee69307","Type":"ContainerStarted","Data":"6441090eef89194c38a3f8f5cb7170e553b5b1fd42098783064948a56858e065"} Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.723111 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-2588-account-create-update-ngjsm" event={"ID":"99a2bd48-0920-45b1-bb67-816da79f3160","Type":"ContainerStarted","Data":"1b1e8b7430e61de252e231e83a5ffbd87c6171217243cc041388da0f0a883191"} Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.723153 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-2588-account-create-update-ngjsm" 
event={"ID":"99a2bd48-0920-45b1-bb67-816da79f3160","Type":"ContainerStarted","Data":"57be441642c00739022594dc5567cbb8de206a4b1c4f5f6422e1d85fe7f2f027"} Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.870161 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-cell0-db-create-wgwgx" podStartSLOduration=2.870142355 podStartE2EDuration="2.870142355s" podCreationTimestamp="2026-01-28 17:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:37:54.869104235 +0000 UTC m=+1321.036892495" watchObservedRunningTime="2026-01-28 17:37:54.870142355 +0000 UTC m=+1321.037930585" Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.906064 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-api-2588-account-create-update-ngjsm" podStartSLOduration=1.906040636 podStartE2EDuration="1.906040636s" podCreationTimestamp="2026-01-28 17:37:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:37:54.905393087 +0000 UTC m=+1321.073181307" watchObservedRunningTime="2026-01-28 17:37:54.906040636 +0000 UTC m=+1321.073828866" Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.909879 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl" podStartSLOduration=1.909864655 podStartE2EDuration="1.909864655s" podCreationTimestamp="2026-01-28 17:37:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:37:54.893544147 +0000 UTC m=+1321.061332377" watchObservedRunningTime="2026-01-28 17:37:54.909864655 +0000 UTC m=+1321.077652885" Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.920862 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-cell1-db-create-72lfm" podStartSLOduration=1.9208486599999999 podStartE2EDuration="1.92084866s" podCreationTimestamp="2026-01-28 17:37:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:37:54.918207325 +0000 UTC m=+1321.085995555" watchObservedRunningTime="2026-01-28 17:37:54.92084866 +0000 UTC m=+1321.088636890" Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.950365 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb" podStartSLOduration=1.950347227 podStartE2EDuration="1.950347227s" podCreationTimestamp="2026-01-28 17:37:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:37:54.941858994 +0000 UTC m=+1321.109647234" watchObservedRunningTime="2026-01-28 17:37:54.950347227 +0000 UTC m=+1321.118135457" Jan 28 17:37:54 crc kubenswrapper[5001]: I0128 17:37:54.959096 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-api-db-create-chbgv" podStartSLOduration=2.959071878 podStartE2EDuration="2.959071878s" podCreationTimestamp="2026-01-28 17:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:37:54.9556819 
+0000 UTC m=+1321.123470130" watchObservedRunningTime="2026-01-28 17:37:54.959071878 +0000 UTC m=+1321.126860108" Jan 28 17:37:55 crc kubenswrapper[5001]: I0128 17:37:55.731319 5001 generic.go:334] "Generic (PLEG): container finished" podID="ea91c114-af0b-41fe-a820-09a2dc5c555d" containerID="20342c4048c675e00fde67c78a0369fb621ed4f3ad71cb16eaab444c29cad6df" exitCode=0 Jan 28 17:37:55 crc kubenswrapper[5001]: I0128 17:37:55.731438 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-chbgv" event={"ID":"ea91c114-af0b-41fe-a820-09a2dc5c555d","Type":"ContainerDied","Data":"20342c4048c675e00fde67c78a0369fb621ed4f3ad71cb16eaab444c29cad6df"} Jan 28 17:37:55 crc kubenswrapper[5001]: I0128 17:37:55.734618 5001 generic.go:334] "Generic (PLEG): container finished" podID="6e06436b-4009-4d8c-81ec-680b1fc02b76" containerID="41a458233c0c1529c5bf1853bc12336b1e4c8cf26d3cb3227daf91edd4d9d097" exitCode=0 Jan 28 17:37:55 crc kubenswrapper[5001]: I0128 17:37:55.734718 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-wgwgx" event={"ID":"6e06436b-4009-4d8c-81ec-680b1fc02b76","Type":"ContainerDied","Data":"41a458233c0c1529c5bf1853bc12336b1e4c8cf26d3cb3227daf91edd4d9d097"} Jan 28 17:37:55 crc kubenswrapper[5001]: I0128 17:37:55.736098 5001 generic.go:334] "Generic (PLEG): container finished" podID="c39d7310-6252-4b44-82e3-0a239050e52d" containerID="596d436cdb262f5813b078f76c001d300849eae806fff2d5b3f61ddf6f5316e8" exitCode=0 Jan 28 17:37:55 crc kubenswrapper[5001]: I0128 17:37:55.736169 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-72lfm" event={"ID":"c39d7310-6252-4b44-82e3-0a239050e52d","Type":"ContainerDied","Data":"596d436cdb262f5813b078f76c001d300849eae806fff2d5b3f61ddf6f5316e8"} Jan 28 17:37:56 crc kubenswrapper[5001]: I0128 17:37:56.744441 5001 generic.go:334] "Generic (PLEG): container finished" podID="5487ccf4-adee-4c40-bef7-75373ee69307" containerID="9b8ec41a75ace235e1d0438625bd9867434b2ccdc062686c9ddd22df15f89830" exitCode=0 Jan 28 17:37:56 crc kubenswrapper[5001]: I0128 17:37:56.744545 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb" event={"ID":"5487ccf4-adee-4c40-bef7-75373ee69307","Type":"ContainerDied","Data":"9b8ec41a75ace235e1d0438625bd9867434b2ccdc062686c9ddd22df15f89830"} Jan 28 17:37:56 crc kubenswrapper[5001]: I0128 17:37:56.746478 5001 generic.go:334] "Generic (PLEG): container finished" podID="99a2bd48-0920-45b1-bb67-816da79f3160" containerID="1b1e8b7430e61de252e231e83a5ffbd87c6171217243cc041388da0f0a883191" exitCode=0 Jan 28 17:37:56 crc kubenswrapper[5001]: I0128 17:37:56.746541 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-2588-account-create-update-ngjsm" event={"ID":"99a2bd48-0920-45b1-bb67-816da79f3160","Type":"ContainerDied","Data":"1b1e8b7430e61de252e231e83a5ffbd87c6171217243cc041388da0f0a883191"} Jan 28 17:37:56 crc kubenswrapper[5001]: I0128 17:37:56.747900 5001 generic.go:334] "Generic (PLEG): container finished" podID="9f3732ce-50e1-4e2e-b082-f8b9b984226b" containerID="22bd3cdc21bf1f59ca3c11bab6677ce09a684e199f135aa1585e052450c70b29" exitCode=0 Jan 28 17:37:56 crc kubenswrapper[5001]: I0128 17:37:56.748120 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl" 
event={"ID":"9f3732ce-50e1-4e2e-b082-f8b9b984226b","Type":"ContainerDied","Data":"22bd3cdc21bf1f59ca3c11bab6677ce09a684e199f135aa1585e052450c70b29"} Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.304069 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-wgwgx" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.339879 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-72lfm" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.351436 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-chbgv" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.401241 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7q6t\" (UniqueName: \"kubernetes.io/projected/6e06436b-4009-4d8c-81ec-680b1fc02b76-kube-api-access-z7q6t\") pod \"6e06436b-4009-4d8c-81ec-680b1fc02b76\" (UID: \"6e06436b-4009-4d8c-81ec-680b1fc02b76\") " Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.401410 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e06436b-4009-4d8c-81ec-680b1fc02b76-operator-scripts\") pod \"6e06436b-4009-4d8c-81ec-680b1fc02b76\" (UID: \"6e06436b-4009-4d8c-81ec-680b1fc02b76\") " Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.401829 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e06436b-4009-4d8c-81ec-680b1fc02b76-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6e06436b-4009-4d8c-81ec-680b1fc02b76" (UID: "6e06436b-4009-4d8c-81ec-680b1fc02b76"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.406502 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e06436b-4009-4d8c-81ec-680b1fc02b76-kube-api-access-z7q6t" (OuterVolumeSpecName: "kube-api-access-z7q6t") pod "6e06436b-4009-4d8c-81ec-680b1fc02b76" (UID: "6e06436b-4009-4d8c-81ec-680b1fc02b76"). InnerVolumeSpecName "kube-api-access-z7q6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.503063 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6bwt\" (UniqueName: \"kubernetes.io/projected/c39d7310-6252-4b44-82e3-0a239050e52d-kube-api-access-r6bwt\") pod \"c39d7310-6252-4b44-82e3-0a239050e52d\" (UID: \"c39d7310-6252-4b44-82e3-0a239050e52d\") " Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.503237 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c39d7310-6252-4b44-82e3-0a239050e52d-operator-scripts\") pod \"c39d7310-6252-4b44-82e3-0a239050e52d\" (UID: \"c39d7310-6252-4b44-82e3-0a239050e52d\") " Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.503286 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw5hf\" (UniqueName: \"kubernetes.io/projected/ea91c114-af0b-41fe-a820-09a2dc5c555d-kube-api-access-kw5hf\") pod \"ea91c114-af0b-41fe-a820-09a2dc5c555d\" (UID: \"ea91c114-af0b-41fe-a820-09a2dc5c555d\") " Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.503330 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea91c114-af0b-41fe-a820-09a2dc5c555d-operator-scripts\") pod \"ea91c114-af0b-41fe-a820-09a2dc5c555d\" (UID: \"ea91c114-af0b-41fe-a820-09a2dc5c555d\") " Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.503600 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e06436b-4009-4d8c-81ec-680b1fc02b76-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.503619 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7q6t\" (UniqueName: \"kubernetes.io/projected/6e06436b-4009-4d8c-81ec-680b1fc02b76-kube-api-access-z7q6t\") on node \"crc\" DevicePath \"\"" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.503820 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea91c114-af0b-41fe-a820-09a2dc5c555d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea91c114-af0b-41fe-a820-09a2dc5c555d" (UID: "ea91c114-af0b-41fe-a820-09a2dc5c555d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.503889 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c39d7310-6252-4b44-82e3-0a239050e52d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c39d7310-6252-4b44-82e3-0a239050e52d" (UID: "c39d7310-6252-4b44-82e3-0a239050e52d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.506212 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c39d7310-6252-4b44-82e3-0a239050e52d-kube-api-access-r6bwt" (OuterVolumeSpecName: "kube-api-access-r6bwt") pod "c39d7310-6252-4b44-82e3-0a239050e52d" (UID: "c39d7310-6252-4b44-82e3-0a239050e52d"). InnerVolumeSpecName "kube-api-access-r6bwt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.507084 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea91c114-af0b-41fe-a820-09a2dc5c555d-kube-api-access-kw5hf" (OuterVolumeSpecName: "kube-api-access-kw5hf") pod "ea91c114-af0b-41fe-a820-09a2dc5c555d" (UID: "ea91c114-af0b-41fe-a820-09a2dc5c555d"). InnerVolumeSpecName "kube-api-access-kw5hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.604861 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c39d7310-6252-4b44-82e3-0a239050e52d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.604906 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw5hf\" (UniqueName: \"kubernetes.io/projected/ea91c114-af0b-41fe-a820-09a2dc5c555d-kube-api-access-kw5hf\") on node \"crc\" DevicePath \"\"" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.604921 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea91c114-af0b-41fe-a820-09a2dc5c555d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.604934 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6bwt\" (UniqueName: \"kubernetes.io/projected/c39d7310-6252-4b44-82e3-0a239050e52d-kube-api-access-r6bwt\") on node \"crc\" DevicePath \"\"" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.756111 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-wgwgx" event={"ID":"6e06436b-4009-4d8c-81ec-680b1fc02b76","Type":"ContainerDied","Data":"e2ea3541c78a9c970762868c4fff7808c2344a4c7cbf7b243de7fecc61b3ad8e"} Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.756162 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2ea3541c78a9c970762868c4fff7808c2344a4c7cbf7b243de7fecc61b3ad8e" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.756219 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-wgwgx" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.762081 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-72lfm" event={"ID":"c39d7310-6252-4b44-82e3-0a239050e52d","Type":"ContainerDied","Data":"f768fa016cc495810c71410edcf8b1cbc52c77a356bb042e0754f22c44ce9470"} Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.762104 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-72lfm" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.762116 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f768fa016cc495810c71410edcf8b1cbc52c77a356bb042e0754f22c44ce9470" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.763694 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-chbgv" event={"ID":"ea91c114-af0b-41fe-a820-09a2dc5c555d","Type":"ContainerDied","Data":"3f40acf6d4fc6165fe7cb44aa0919f3773fec2e57df3428b9362c76a46cf9d77"} Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.763722 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f40acf6d4fc6165fe7cb44aa0919f3773fec2e57df3428b9362c76a46cf9d77" Jan 28 17:37:57 crc kubenswrapper[5001]: I0128 17:37:57.763758 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-chbgv" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.101285 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.213016 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrwvn\" (UniqueName: \"kubernetes.io/projected/9f3732ce-50e1-4e2e-b082-f8b9b984226b-kube-api-access-lrwvn\") pod \"9f3732ce-50e1-4e2e-b082-f8b9b984226b\" (UID: \"9f3732ce-50e1-4e2e-b082-f8b9b984226b\") " Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.213139 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f3732ce-50e1-4e2e-b082-f8b9b984226b-operator-scripts\") pod \"9f3732ce-50e1-4e2e-b082-f8b9b984226b\" (UID: \"9f3732ce-50e1-4e2e-b082-f8b9b984226b\") " Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.213960 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f3732ce-50e1-4e2e-b082-f8b9b984226b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9f3732ce-50e1-4e2e-b082-f8b9b984226b" (UID: "9f3732ce-50e1-4e2e-b082-f8b9b984226b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.219061 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f3732ce-50e1-4e2e-b082-f8b9b984226b-kube-api-access-lrwvn" (OuterVolumeSpecName: "kube-api-access-lrwvn") pod "9f3732ce-50e1-4e2e-b082-f8b9b984226b" (UID: "9f3732ce-50e1-4e2e-b082-f8b9b984226b"). InnerVolumeSpecName "kube-api-access-lrwvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.278039 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-2588-account-create-update-ngjsm" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.283144 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.314738 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f3732ce-50e1-4e2e-b082-f8b9b984226b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.314824 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrwvn\" (UniqueName: \"kubernetes.io/projected/9f3732ce-50e1-4e2e-b082-f8b9b984226b-kube-api-access-lrwvn\") on node \"crc\" DevicePath \"\"" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.416740 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4rth\" (UniqueName: \"kubernetes.io/projected/5487ccf4-adee-4c40-bef7-75373ee69307-kube-api-access-g4rth\") pod \"5487ccf4-adee-4c40-bef7-75373ee69307\" (UID: \"5487ccf4-adee-4c40-bef7-75373ee69307\") " Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.416815 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5487ccf4-adee-4c40-bef7-75373ee69307-operator-scripts\") pod \"5487ccf4-adee-4c40-bef7-75373ee69307\" (UID: \"5487ccf4-adee-4c40-bef7-75373ee69307\") " Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.416853 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjmpr\" (UniqueName: \"kubernetes.io/projected/99a2bd48-0920-45b1-bb67-816da79f3160-kube-api-access-wjmpr\") pod \"99a2bd48-0920-45b1-bb67-816da79f3160\" (UID: \"99a2bd48-0920-45b1-bb67-816da79f3160\") " Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.416902 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99a2bd48-0920-45b1-bb67-816da79f3160-operator-scripts\") pod \"99a2bd48-0920-45b1-bb67-816da79f3160\" (UID: \"99a2bd48-0920-45b1-bb67-816da79f3160\") " Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.417261 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5487ccf4-adee-4c40-bef7-75373ee69307-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5487ccf4-adee-4c40-bef7-75373ee69307" (UID: "5487ccf4-adee-4c40-bef7-75373ee69307"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.417747 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99a2bd48-0920-45b1-bb67-816da79f3160-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "99a2bd48-0920-45b1-bb67-816da79f3160" (UID: "99a2bd48-0920-45b1-bb67-816da79f3160"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.419580 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99a2bd48-0920-45b1-bb67-816da79f3160-kube-api-access-wjmpr" (OuterVolumeSpecName: "kube-api-access-wjmpr") pod "99a2bd48-0920-45b1-bb67-816da79f3160" (UID: "99a2bd48-0920-45b1-bb67-816da79f3160"). InnerVolumeSpecName "kube-api-access-wjmpr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.419683 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5487ccf4-adee-4c40-bef7-75373ee69307-kube-api-access-g4rth" (OuterVolumeSpecName: "kube-api-access-g4rth") pod "5487ccf4-adee-4c40-bef7-75373ee69307" (UID: "5487ccf4-adee-4c40-bef7-75373ee69307"). InnerVolumeSpecName "kube-api-access-g4rth". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.518613 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4rth\" (UniqueName: \"kubernetes.io/projected/5487ccf4-adee-4c40-bef7-75373ee69307-kube-api-access-g4rth\") on node \"crc\" DevicePath \"\"" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.518661 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5487ccf4-adee-4c40-bef7-75373ee69307-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.518674 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjmpr\" (UniqueName: \"kubernetes.io/projected/99a2bd48-0920-45b1-bb67-816da79f3160-kube-api-access-wjmpr\") on node \"crc\" DevicePath \"\"" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.518685 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99a2bd48-0920-45b1-bb67-816da79f3160-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.773169 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb" event={"ID":"5487ccf4-adee-4c40-bef7-75373ee69307","Type":"ContainerDied","Data":"6441090eef89194c38a3f8f5cb7170e553b5b1fd42098783064948a56858e065"} Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.773207 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6441090eef89194c38a3f8f5cb7170e553b5b1fd42098783064948a56858e065" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.773256 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.775903 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-2588-account-create-update-ngjsm" event={"ID":"99a2bd48-0920-45b1-bb67-816da79f3160","Type":"ContainerDied","Data":"57be441642c00739022594dc5567cbb8de206a4b1c4f5f6422e1d85fe7f2f027"} Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.775923 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-2588-account-create-update-ngjsm" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.776002 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57be441642c00739022594dc5567cbb8de206a4b1c4f5f6422e1d85fe7f2f027" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.777911 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl" event={"ID":"9f3732ce-50e1-4e2e-b082-f8b9b984226b","Type":"ContainerDied","Data":"d207d1baf477dcb95f2eb0ef55bb48655cee89fc020b259cf924c8cc32009059"} Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.777933 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl" Jan 28 17:37:58 crc kubenswrapper[5001]: I0128 17:37:58.777945 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d207d1baf477dcb95f2eb0ef55bb48655cee89fc020b259cf924c8cc32009059" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.484957 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt"] Jan 28 17:38:03 crc kubenswrapper[5001]: E0128 17:38:03.487105 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5487ccf4-adee-4c40-bef7-75373ee69307" containerName="mariadb-account-create-update" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.487208 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="5487ccf4-adee-4c40-bef7-75373ee69307" containerName="mariadb-account-create-update" Jan 28 17:38:03 crc kubenswrapper[5001]: E0128 17:38:03.487291 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea91c114-af0b-41fe-a820-09a2dc5c555d" containerName="mariadb-database-create" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.487359 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea91c114-af0b-41fe-a820-09a2dc5c555d" containerName="mariadb-database-create" Jan 28 17:38:03 crc kubenswrapper[5001]: E0128 17:38:03.487447 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39d7310-6252-4b44-82e3-0a239050e52d" containerName="mariadb-database-create" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.487518 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39d7310-6252-4b44-82e3-0a239050e52d" containerName="mariadb-database-create" Jan 28 17:38:03 crc kubenswrapper[5001]: E0128 17:38:03.487602 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3732ce-50e1-4e2e-b082-f8b9b984226b" containerName="mariadb-account-create-update" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.487681 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3732ce-50e1-4e2e-b082-f8b9b984226b" containerName="mariadb-account-create-update" Jan 28 17:38:03 crc kubenswrapper[5001]: E0128 17:38:03.487771 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99a2bd48-0920-45b1-bb67-816da79f3160" containerName="mariadb-account-create-update" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.487846 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="99a2bd48-0920-45b1-bb67-816da79f3160" containerName="mariadb-account-create-update" Jan 28 17:38:03 crc kubenswrapper[5001]: E0128 17:38:03.488063 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e06436b-4009-4d8c-81ec-680b1fc02b76" 
containerName="mariadb-database-create" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.488141 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e06436b-4009-4d8c-81ec-680b1fc02b76" containerName="mariadb-database-create" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.488431 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39d7310-6252-4b44-82e3-0a239050e52d" containerName="mariadb-database-create" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.488520 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="5487ccf4-adee-4c40-bef7-75373ee69307" containerName="mariadb-account-create-update" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.488592 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e06436b-4009-4d8c-81ec-680b1fc02b76" containerName="mariadb-database-create" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.488676 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea91c114-af0b-41fe-a820-09a2dc5c555d" containerName="mariadb-database-create" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.488763 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f3732ce-50e1-4e2e-b082-f8b9b984226b" containerName="mariadb-account-create-update" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.488847 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="99a2bd48-0920-45b1-bb67-816da79f3160" containerName="mariadb-account-create-update" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.489750 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.497004 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.497068 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt"] Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.497447 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.499888 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-jdrzf" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.594360 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2cb1b11-68d6-435d-828c-a3b6138fc903-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-rslkt\" (UID: \"d2cb1b11-68d6-435d-828c-a3b6138fc903\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.594960 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw6vn\" (UniqueName: \"kubernetes.io/projected/d2cb1b11-68d6-435d-828c-a3b6138fc903-kube-api-access-kw6vn\") pod \"nova-kuttl-cell0-conductor-db-sync-rslkt\" (UID: \"d2cb1b11-68d6-435d-828c-a3b6138fc903\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.595137 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d2cb1b11-68d6-435d-828c-a3b6138fc903-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-rslkt\" (UID: \"d2cb1b11-68d6-435d-828c-a3b6138fc903\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.697022 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw6vn\" (UniqueName: \"kubernetes.io/projected/d2cb1b11-68d6-435d-828c-a3b6138fc903-kube-api-access-kw6vn\") pod \"nova-kuttl-cell0-conductor-db-sync-rslkt\" (UID: \"d2cb1b11-68d6-435d-828c-a3b6138fc903\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.697107 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2cb1b11-68d6-435d-828c-a3b6138fc903-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-rslkt\" (UID: \"d2cb1b11-68d6-435d-828c-a3b6138fc903\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.697162 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2cb1b11-68d6-435d-828c-a3b6138fc903-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-rslkt\" (UID: \"d2cb1b11-68d6-435d-828c-a3b6138fc903\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.702612 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2cb1b11-68d6-435d-828c-a3b6138fc903-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-rslkt\" (UID: \"d2cb1b11-68d6-435d-828c-a3b6138fc903\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.703821 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2cb1b11-68d6-435d-828c-a3b6138fc903-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-rslkt\" (UID: \"d2cb1b11-68d6-435d-828c-a3b6138fc903\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.716654 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw6vn\" (UniqueName: \"kubernetes.io/projected/d2cb1b11-68d6-435d-828c-a3b6138fc903-kube-api-access-kw6vn\") pod \"nova-kuttl-cell0-conductor-db-sync-rslkt\" (UID: \"d2cb1b11-68d6-435d-828c-a3b6138fc903\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt" Jan 28 17:38:03 crc kubenswrapper[5001]: I0128 17:38:03.818928 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt" Jan 28 17:38:04 crc kubenswrapper[5001]: I0128 17:38:04.254253 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt"] Jan 28 17:38:04 crc kubenswrapper[5001]: I0128 17:38:04.828071 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt" event={"ID":"d2cb1b11-68d6-435d-828c-a3b6138fc903","Type":"ContainerStarted","Data":"be6d6b64b18802ca7f588937996976bcc7322b836cc7f0621209ca05b095dfd3"} Jan 28 17:38:11 crc kubenswrapper[5001]: I0128 17:38:11.886170 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt" event={"ID":"d2cb1b11-68d6-435d-828c-a3b6138fc903","Type":"ContainerStarted","Data":"864741892b7f5fd75b65c6e4079fc862e15f9c9285bf1db9636cf43d72ff2968"} Jan 28 17:38:11 crc kubenswrapper[5001]: I0128 17:38:11.906635 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt" podStartSLOduration=2.033256677 podStartE2EDuration="8.906613933s" podCreationTimestamp="2026-01-28 17:38:03 +0000 UTC" firstStartedPulling="2026-01-28 17:38:04.266367135 +0000 UTC m=+1330.434155365" lastFinishedPulling="2026-01-28 17:38:11.139724391 +0000 UTC m=+1337.307512621" observedRunningTime="2026-01-28 17:38:11.900199729 +0000 UTC m=+1338.067987979" watchObservedRunningTime="2026-01-28 17:38:11.906613933 +0000 UTC m=+1338.074402153" Jan 28 17:38:35 crc kubenswrapper[5001]: I0128 17:38:35.932927 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fcjtk"] Jan 28 17:38:35 crc kubenswrapper[5001]: I0128 17:38:35.935064 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fcjtk" Jan 28 17:38:35 crc kubenswrapper[5001]: I0128 17:38:35.944714 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fcjtk"] Jan 28 17:38:36 crc kubenswrapper[5001]: I0128 17:38:36.072556 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw52h\" (UniqueName: \"kubernetes.io/projected/073d37d3-c4c4-4b95-b3ab-d4b5da7297c2-kube-api-access-mw52h\") pod \"redhat-operators-fcjtk\" (UID: \"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2\") " pod="openshift-marketplace/redhat-operators-fcjtk" Jan 28 17:38:36 crc kubenswrapper[5001]: I0128 17:38:36.072650 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/073d37d3-c4c4-4b95-b3ab-d4b5da7297c2-catalog-content\") pod \"redhat-operators-fcjtk\" (UID: \"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2\") " pod="openshift-marketplace/redhat-operators-fcjtk" Jan 28 17:38:36 crc kubenswrapper[5001]: I0128 17:38:36.072748 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/073d37d3-c4c4-4b95-b3ab-d4b5da7297c2-utilities\") pod \"redhat-operators-fcjtk\" (UID: \"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2\") " pod="openshift-marketplace/redhat-operators-fcjtk" Jan 28 17:38:36 crc kubenswrapper[5001]: I0128 17:38:36.173948 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/073d37d3-c4c4-4b95-b3ab-d4b5da7297c2-utilities\") pod \"redhat-operators-fcjtk\" (UID: \"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2\") " pod="openshift-marketplace/redhat-operators-fcjtk" Jan 28 17:38:36 crc kubenswrapper[5001]: I0128 17:38:36.174290 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw52h\" (UniqueName: \"kubernetes.io/projected/073d37d3-c4c4-4b95-b3ab-d4b5da7297c2-kube-api-access-mw52h\") pod \"redhat-operators-fcjtk\" (UID: \"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2\") " pod="openshift-marketplace/redhat-operators-fcjtk" Jan 28 17:38:36 crc kubenswrapper[5001]: I0128 17:38:36.174338 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/073d37d3-c4c4-4b95-b3ab-d4b5da7297c2-catalog-content\") pod \"redhat-operators-fcjtk\" (UID: \"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2\") " pod="openshift-marketplace/redhat-operators-fcjtk" Jan 28 17:38:36 crc kubenswrapper[5001]: I0128 17:38:36.174585 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/073d37d3-c4c4-4b95-b3ab-d4b5da7297c2-utilities\") pod \"redhat-operators-fcjtk\" (UID: \"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2\") " pod="openshift-marketplace/redhat-operators-fcjtk" Jan 28 17:38:36 crc kubenswrapper[5001]: I0128 17:38:36.174739 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/073d37d3-c4c4-4b95-b3ab-d4b5da7297c2-catalog-content\") pod \"redhat-operators-fcjtk\" (UID: \"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2\") " pod="openshift-marketplace/redhat-operators-fcjtk" Jan 28 17:38:36 crc kubenswrapper[5001]: I0128 17:38:36.196531 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mw52h\" (UniqueName: \"kubernetes.io/projected/073d37d3-c4c4-4b95-b3ab-d4b5da7297c2-kube-api-access-mw52h\") pod \"redhat-operators-fcjtk\" (UID: \"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2\") " pod="openshift-marketplace/redhat-operators-fcjtk" Jan 28 17:38:36 crc kubenswrapper[5001]: I0128 17:38:36.256167 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fcjtk" Jan 28 17:38:37 crc kubenswrapper[5001]: I0128 17:38:37.251779 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fcjtk"] Jan 28 17:38:38 crc kubenswrapper[5001]: I0128 17:38:38.093214 5001 generic.go:334] "Generic (PLEG): container finished" podID="073d37d3-c4c4-4b95-b3ab-d4b5da7297c2" containerID="3149fd3fef1cf5ae3b531ba3556c79e89b26939bed127e6f60f58760a980778a" exitCode=0 Jan 28 17:38:38 crc kubenswrapper[5001]: I0128 17:38:38.093259 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcjtk" event={"ID":"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2","Type":"ContainerDied","Data":"3149fd3fef1cf5ae3b531ba3556c79e89b26939bed127e6f60f58760a980778a"} Jan 28 17:38:38 crc kubenswrapper[5001]: I0128 17:38:38.093511 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcjtk" event={"ID":"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2","Type":"ContainerStarted","Data":"f3c3f55b4a16d511bb5b4538fd2f95ff98b49a9e3e5158a1e3cd64e7e6f7254f"} Jan 28 17:38:40 crc kubenswrapper[5001]: I0128 17:38:40.111278 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcjtk" event={"ID":"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2","Type":"ContainerStarted","Data":"ddb4fce535609c1960e341ca803f4a524e216ceb30b35f9ac6a3d9f5bc7501d5"} Jan 28 17:38:41 crc kubenswrapper[5001]: I0128 17:38:41.121877 5001 generic.go:334] "Generic (PLEG): container finished" podID="d2cb1b11-68d6-435d-828c-a3b6138fc903" containerID="864741892b7f5fd75b65c6e4079fc862e15f9c9285bf1db9636cf43d72ff2968" exitCode=0 Jan 28 17:38:41 crc kubenswrapper[5001]: I0128 17:38:41.121999 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt" event={"ID":"d2cb1b11-68d6-435d-828c-a3b6138fc903","Type":"ContainerDied","Data":"864741892b7f5fd75b65c6e4079fc862e15f9c9285bf1db9636cf43d72ff2968"} Jan 28 17:38:41 crc kubenswrapper[5001]: I0128 17:38:41.124198 5001 generic.go:334] "Generic (PLEG): container finished" podID="073d37d3-c4c4-4b95-b3ab-d4b5da7297c2" containerID="ddb4fce535609c1960e341ca803f4a524e216ceb30b35f9ac6a3d9f5bc7501d5" exitCode=0 Jan 28 17:38:41 crc kubenswrapper[5001]: I0128 17:38:41.124230 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcjtk" event={"ID":"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2","Type":"ContainerDied","Data":"ddb4fce535609c1960e341ca803f4a524e216ceb30b35f9ac6a3d9f5bc7501d5"} Jan 28 17:38:42 crc kubenswrapper[5001]: I0128 17:38:42.133628 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcjtk" event={"ID":"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2","Type":"ContainerStarted","Data":"98a3b02e117691ebb72799415d7fbf05be23d976f3605c28513d8eaa98231a8a"} Jan 28 17:38:42 crc kubenswrapper[5001]: I0128 17:38:42.164630 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fcjtk" podStartSLOduration=3.707787483 
podStartE2EDuration="7.164610174s" podCreationTimestamp="2026-01-28 17:38:35 +0000 UTC" firstStartedPulling="2026-01-28 17:38:38.094760678 +0000 UTC m=+1364.262548908" lastFinishedPulling="2026-01-28 17:38:41.551583369 +0000 UTC m=+1367.719371599" observedRunningTime="2026-01-28 17:38:42.154401901 +0000 UTC m=+1368.322190141" watchObservedRunningTime="2026-01-28 17:38:42.164610174 +0000 UTC m=+1368.332398404" Jan 28 17:38:42 crc kubenswrapper[5001]: I0128 17:38:42.439273 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt" Jan 28 17:38:42 crc kubenswrapper[5001]: I0128 17:38:42.583755 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2cb1b11-68d6-435d-828c-a3b6138fc903-scripts\") pod \"d2cb1b11-68d6-435d-828c-a3b6138fc903\" (UID: \"d2cb1b11-68d6-435d-828c-a3b6138fc903\") " Jan 28 17:38:42 crc kubenswrapper[5001]: I0128 17:38:42.583819 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2cb1b11-68d6-435d-828c-a3b6138fc903-config-data\") pod \"d2cb1b11-68d6-435d-828c-a3b6138fc903\" (UID: \"d2cb1b11-68d6-435d-828c-a3b6138fc903\") " Jan 28 17:38:42 crc kubenswrapper[5001]: I0128 17:38:42.583849 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw6vn\" (UniqueName: \"kubernetes.io/projected/d2cb1b11-68d6-435d-828c-a3b6138fc903-kube-api-access-kw6vn\") pod \"d2cb1b11-68d6-435d-828c-a3b6138fc903\" (UID: \"d2cb1b11-68d6-435d-828c-a3b6138fc903\") " Jan 28 17:38:42 crc kubenswrapper[5001]: I0128 17:38:42.589532 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2cb1b11-68d6-435d-828c-a3b6138fc903-scripts" (OuterVolumeSpecName: "scripts") pod "d2cb1b11-68d6-435d-828c-a3b6138fc903" (UID: "d2cb1b11-68d6-435d-828c-a3b6138fc903"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:38:42 crc kubenswrapper[5001]: I0128 17:38:42.589733 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2cb1b11-68d6-435d-828c-a3b6138fc903-kube-api-access-kw6vn" (OuterVolumeSpecName: "kube-api-access-kw6vn") pod "d2cb1b11-68d6-435d-828c-a3b6138fc903" (UID: "d2cb1b11-68d6-435d-828c-a3b6138fc903"). InnerVolumeSpecName "kube-api-access-kw6vn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:38:42 crc kubenswrapper[5001]: I0128 17:38:42.607232 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2cb1b11-68d6-435d-828c-a3b6138fc903-config-data" (OuterVolumeSpecName: "config-data") pod "d2cb1b11-68d6-435d-828c-a3b6138fc903" (UID: "d2cb1b11-68d6-435d-828c-a3b6138fc903"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:38:42 crc kubenswrapper[5001]: I0128 17:38:42.686026 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2cb1b11-68d6-435d-828c-a3b6138fc903-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:38:42 crc kubenswrapper[5001]: I0128 17:38:42.686057 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2cb1b11-68d6-435d-828c-a3b6138fc903-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:38:42 crc kubenswrapper[5001]: I0128 17:38:42.686069 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw6vn\" (UniqueName: \"kubernetes.io/projected/d2cb1b11-68d6-435d-828c-a3b6138fc903-kube-api-access-kw6vn\") on node \"crc\" DevicePath \"\"" Jan 28 17:38:43 crc kubenswrapper[5001]: I0128 17:38:43.141831 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt" event={"ID":"d2cb1b11-68d6-435d-828c-a3b6138fc903","Type":"ContainerDied","Data":"be6d6b64b18802ca7f588937996976bcc7322b836cc7f0621209ca05b095dfd3"} Jan 28 17:38:43 crc kubenswrapper[5001]: I0128 17:38:43.142143 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be6d6b64b18802ca7f588937996976bcc7322b836cc7f0621209ca05b095dfd3" Jan 28 17:38:43 crc kubenswrapper[5001]: I0128 17:38:43.141878 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt" Jan 28 17:38:43 crc kubenswrapper[5001]: I0128 17:38:43.242084 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:38:43 crc kubenswrapper[5001]: E0128 17:38:43.242488 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2cb1b11-68d6-435d-828c-a3b6138fc903" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 17:38:43 crc kubenswrapper[5001]: I0128 17:38:43.242514 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2cb1b11-68d6-435d-828c-a3b6138fc903" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 17:38:43 crc kubenswrapper[5001]: I0128 17:38:43.242687 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2cb1b11-68d6-435d-828c-a3b6138fc903" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 17:38:43 crc kubenswrapper[5001]: I0128 17:38:43.244596 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:38:43 crc kubenswrapper[5001]: I0128 17:38:43.249047 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-jdrzf" Jan 28 17:38:43 crc kubenswrapper[5001]: I0128 17:38:43.249401 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:38:43 crc kubenswrapper[5001]: I0128 17:38:43.249572 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 28 17:38:43 crc kubenswrapper[5001]: I0128 17:38:43.398353 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f483fd4-461b-47b8-9e21-0398c809539c-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"3f483fd4-461b-47b8-9e21-0398c809539c\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:38:43 crc kubenswrapper[5001]: I0128 17:38:43.398446 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbfrg\" (UniqueName: \"kubernetes.io/projected/3f483fd4-461b-47b8-9e21-0398c809539c-kube-api-access-jbfrg\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"3f483fd4-461b-47b8-9e21-0398c809539c\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:38:43 crc kubenswrapper[5001]: I0128 17:38:43.499806 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbfrg\" (UniqueName: \"kubernetes.io/projected/3f483fd4-461b-47b8-9e21-0398c809539c-kube-api-access-jbfrg\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"3f483fd4-461b-47b8-9e21-0398c809539c\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:38:43 crc kubenswrapper[5001]: I0128 17:38:43.499966 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f483fd4-461b-47b8-9e21-0398c809539c-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"3f483fd4-461b-47b8-9e21-0398c809539c\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:38:43 crc kubenswrapper[5001]: I0128 17:38:43.504361 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f483fd4-461b-47b8-9e21-0398c809539c-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"3f483fd4-461b-47b8-9e21-0398c809539c\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:38:43 crc kubenswrapper[5001]: I0128 17:38:43.522842 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbfrg\" (UniqueName: \"kubernetes.io/projected/3f483fd4-461b-47b8-9e21-0398c809539c-kube-api-access-jbfrg\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"3f483fd4-461b-47b8-9e21-0398c809539c\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:38:43 crc kubenswrapper[5001]: I0128 17:38:43.559535 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:38:43 crc kubenswrapper[5001]: I0128 17:38:43.973330 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:38:43 crc kubenswrapper[5001]: W0128 17:38:43.978370 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f483fd4_461b_47b8_9e21_0398c809539c.slice/crio-38e724486ab6a166e9ce9ee2d91e7763b5bc0c96486d8880adc682a171cfb99a WatchSource:0}: Error finding container 38e724486ab6a166e9ce9ee2d91e7763b5bc0c96486d8880adc682a171cfb99a: Status 404 returned error can't find the container with id 38e724486ab6a166e9ce9ee2d91e7763b5bc0c96486d8880adc682a171cfb99a Jan 28 17:38:44 crc kubenswrapper[5001]: I0128 17:38:44.150940 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"3f483fd4-461b-47b8-9e21-0398c809539c","Type":"ContainerStarted","Data":"8932faba297bf3287e204ee8c0840334f493e95c8295907e9e8d74f54d94091a"} Jan 28 17:38:44 crc kubenswrapper[5001]: I0128 17:38:44.150990 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"3f483fd4-461b-47b8-9e21-0398c809539c","Type":"ContainerStarted","Data":"38e724486ab6a166e9ce9ee2d91e7763b5bc0c96486d8880adc682a171cfb99a"} Jan 28 17:38:44 crc kubenswrapper[5001]: I0128 17:38:44.152011 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:38:44 crc kubenswrapper[5001]: I0128 17:38:44.167683 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=1.167666008 podStartE2EDuration="1.167666008s" podCreationTimestamp="2026-01-28 17:38:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:38:44.166203456 +0000 UTC m=+1370.333991706" watchObservedRunningTime="2026-01-28 17:38:44.167666008 +0000 UTC m=+1370.335454238" Jan 28 17:38:46 crc kubenswrapper[5001]: I0128 17:38:46.257235 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fcjtk" Jan 28 17:38:46 crc kubenswrapper[5001]: I0128 17:38:46.257726 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fcjtk" Jan 28 17:38:47 crc kubenswrapper[5001]: I0128 17:38:47.298586 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fcjtk" podUID="073d37d3-c4c4-4b95-b3ab-d4b5da7297c2" containerName="registry-server" probeResult="failure" output=< Jan 28 17:38:47 crc kubenswrapper[5001]: timeout: failed to connect service ":50051" within 1s Jan 28 17:38:47 crc kubenswrapper[5001]: > Jan 28 17:38:53 crc kubenswrapper[5001]: I0128 17:38:53.593227 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.130873 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k"] Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.132178 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.135623 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.136011 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.138409 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k"] Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.261901 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81664e15-831b-43f6-af28-95b2f545f731-config-data\") pod \"nova-kuttl-cell0-cell-mapping-csp9k\" (UID: \"81664e15-831b-43f6-af28-95b2f545f731\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.262036 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjp7h\" (UniqueName: \"kubernetes.io/projected/81664e15-831b-43f6-af28-95b2f545f731-kube-api-access-qjp7h\") pod \"nova-kuttl-cell0-cell-mapping-csp9k\" (UID: \"81664e15-831b-43f6-af28-95b2f545f731\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.262087 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81664e15-831b-43f6-af28-95b2f545f731-scripts\") pod \"nova-kuttl-cell0-cell-mapping-csp9k\" (UID: \"81664e15-831b-43f6-af28-95b2f545f731\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.322607 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.324302 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.328520 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.335648 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.363733 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81664e15-831b-43f6-af28-95b2f545f731-config-data\") pod \"nova-kuttl-cell0-cell-mapping-csp9k\" (UID: \"81664e15-831b-43f6-af28-95b2f545f731\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.364264 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjp7h\" (UniqueName: \"kubernetes.io/projected/81664e15-831b-43f6-af28-95b2f545f731-kube-api-access-qjp7h\") pod \"nova-kuttl-cell0-cell-mapping-csp9k\" (UID: \"81664e15-831b-43f6-af28-95b2f545f731\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.364425 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81664e15-831b-43f6-af28-95b2f545f731-scripts\") pod \"nova-kuttl-cell0-cell-mapping-csp9k\" (UID: \"81664e15-831b-43f6-af28-95b2f545f731\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.369894 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81664e15-831b-43f6-af28-95b2f545f731-scripts\") pod \"nova-kuttl-cell0-cell-mapping-csp9k\" (UID: \"81664e15-831b-43f6-af28-95b2f545f731\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.369910 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81664e15-831b-43f6-af28-95b2f545f731-config-data\") pod \"nova-kuttl-cell0-cell-mapping-csp9k\" (UID: \"81664e15-831b-43f6-af28-95b2f545f731\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.387935 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjp7h\" (UniqueName: \"kubernetes.io/projected/81664e15-831b-43f6-af28-95b2f545f731-kube-api-access-qjp7h\") pod \"nova-kuttl-cell0-cell-mapping-csp9k\" (UID: \"81664e15-831b-43f6-af28-95b2f545f731\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.428076 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.428962 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.430708 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.445973 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.458118 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.466322 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f770500-4b6a-4366-b824-7ff68c9fb513-logs\") pod \"nova-kuttl-api-0\" (UID: \"7f770500-4b6a-4366-b824-7ff68c9fb513\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.466463 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f770500-4b6a-4366-b824-7ff68c9fb513-config-data\") pod \"nova-kuttl-api-0\" (UID: \"7f770500-4b6a-4366-b824-7ff68c9fb513\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.466525 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a9bcce-6c84-4758-aca9-5fadb324f63b-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"32a9bcce-6c84-4758-aca9-5fadb324f63b\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.466579 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48gln\" (UniqueName: \"kubernetes.io/projected/7f770500-4b6a-4366-b824-7ff68c9fb513-kube-api-access-48gln\") pod \"nova-kuttl-api-0\" (UID: \"7f770500-4b6a-4366-b824-7ff68c9fb513\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.466679 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l85hk\" (UniqueName: \"kubernetes.io/projected/32a9bcce-6c84-4758-aca9-5fadb324f63b-kube-api-access-l85hk\") pod \"nova-kuttl-scheduler-0\" (UID: \"32a9bcce-6c84-4758-aca9-5fadb324f63b\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.533174 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.545654 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.549752 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.563703 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.567561 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f770500-4b6a-4366-b824-7ff68c9fb513-logs\") pod \"nova-kuttl-api-0\" (UID: \"7f770500-4b6a-4366-b824-7ff68c9fb513\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.567620 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebf86c5f-6dd4-4159-b6a2-4f84d0565205-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"ebf86c5f-6dd4-4159-b6a2-4f84d0565205\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.567647 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f770500-4b6a-4366-b824-7ff68c9fb513-config-data\") pod \"nova-kuttl-api-0\" (UID: \"7f770500-4b6a-4366-b824-7ff68c9fb513\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.567672 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a9bcce-6c84-4758-aca9-5fadb324f63b-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"32a9bcce-6c84-4758-aca9-5fadb324f63b\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.567689 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48gln\" (UniqueName: \"kubernetes.io/projected/7f770500-4b6a-4366-b824-7ff68c9fb513-kube-api-access-48gln\") pod \"nova-kuttl-api-0\" (UID: \"7f770500-4b6a-4366-b824-7ff68c9fb513\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.567709 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf86c5f-6dd4-4159-b6a2-4f84d0565205-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"ebf86c5f-6dd4-4159-b6a2-4f84d0565205\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.567759 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l85hk\" (UniqueName: \"kubernetes.io/projected/32a9bcce-6c84-4758-aca9-5fadb324f63b-kube-api-access-l85hk\") pod \"nova-kuttl-scheduler-0\" (UID: \"32a9bcce-6c84-4758-aca9-5fadb324f63b\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.567791 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcgdg\" (UniqueName: \"kubernetes.io/projected/ebf86c5f-6dd4-4159-b6a2-4f84d0565205-kube-api-access-lcgdg\") pod \"nova-kuttl-metadata-0\" (UID: \"ebf86c5f-6dd4-4159-b6a2-4f84d0565205\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 
17:38:54.568273 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f770500-4b6a-4366-b824-7ff68c9fb513-logs\") pod \"nova-kuttl-api-0\" (UID: \"7f770500-4b6a-4366-b824-7ff68c9fb513\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.573581 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.573946 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.587938 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a9bcce-6c84-4758-aca9-5fadb324f63b-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"32a9bcce-6c84-4758-aca9-5fadb324f63b\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.591159 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48gln\" (UniqueName: \"kubernetes.io/projected/7f770500-4b6a-4366-b824-7ff68c9fb513-kube-api-access-48gln\") pod \"nova-kuttl-api-0\" (UID: \"7f770500-4b6a-4366-b824-7ff68c9fb513\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.592705 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f770500-4b6a-4366-b824-7ff68c9fb513-config-data\") pod \"nova-kuttl-api-0\" (UID: \"7f770500-4b6a-4366-b824-7ff68c9fb513\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.599005 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l85hk\" (UniqueName: \"kubernetes.io/projected/32a9bcce-6c84-4758-aca9-5fadb324f63b-kube-api-access-l85hk\") pod \"nova-kuttl-scheduler-0\" (UID: \"32a9bcce-6c84-4758-aca9-5fadb324f63b\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.640467 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.669934 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcgdg\" (UniqueName: \"kubernetes.io/projected/ebf86c5f-6dd4-4159-b6a2-4f84d0565205-kube-api-access-lcgdg\") pod \"nova-kuttl-metadata-0\" (UID: \"ebf86c5f-6dd4-4159-b6a2-4f84d0565205\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.670548 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebf86c5f-6dd4-4159-b6a2-4f84d0565205-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"ebf86c5f-6dd4-4159-b6a2-4f84d0565205\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.670648 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf86c5f-6dd4-4159-b6a2-4f84d0565205-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"ebf86c5f-6dd4-4159-b6a2-4f84d0565205\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.672756 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebf86c5f-6dd4-4159-b6a2-4f84d0565205-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"ebf86c5f-6dd4-4159-b6a2-4f84d0565205\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.682055 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf86c5f-6dd4-4159-b6a2-4f84d0565205-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"ebf86c5f-6dd4-4159-b6a2-4f84d0565205\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.693320 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcgdg\" (UniqueName: \"kubernetes.io/projected/ebf86c5f-6dd4-4159-b6a2-4f84d0565205-kube-api-access-lcgdg\") pod \"nova-kuttl-metadata-0\" (UID: \"ebf86c5f-6dd4-4159-b6a2-4f84d0565205\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.747641 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.764404 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.765764 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.768919 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data" Jan 28 17:38:54 crc kubenswrapper[5001]: I0128 17:38:54.773526 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:54.866661 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:54.874076 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aeb8343-bfe8-4d4f-80ff-50fb55910691-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"9aeb8343-bfe8-4d4f-80ff-50fb55910691\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:54.874185 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbmpg\" (UniqueName: \"kubernetes.io/projected/9aeb8343-bfe8-4d4f-80ff-50fb55910691-kube-api-access-pbmpg\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"9aeb8343-bfe8-4d4f-80ff-50fb55910691\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:54.978145 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbmpg\" (UniqueName: \"kubernetes.io/projected/9aeb8343-bfe8-4d4f-80ff-50fb55910691-kube-api-access-pbmpg\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"9aeb8343-bfe8-4d4f-80ff-50fb55910691\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:54.978228 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aeb8343-bfe8-4d4f-80ff-50fb55910691-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"9aeb8343-bfe8-4d4f-80ff-50fb55910691\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:54.991415 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aeb8343-bfe8-4d4f-80ff-50fb55910691-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"9aeb8343-bfe8-4d4f-80ff-50fb55910691\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:54.991545 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k"] Jan 28 17:38:55 crc kubenswrapper[5001]: W0128 17:38:54.998519 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81664e15_831b_43f6_af28_95b2f545f731.slice/crio-3242d5f49ee8c526c9c0a3606a91608cb511147ec09eaa036a6e464ece7667e5 WatchSource:0}: Error finding container 3242d5f49ee8c526c9c0a3606a91608cb511147ec09eaa036a6e464ece7667e5: Status 404 returned error can't find the container with id 3242d5f49ee8c526c9c0a3606a91608cb511147ec09eaa036a6e464ece7667e5 Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:54.999577 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbmpg\" (UniqueName: \"kubernetes.io/projected/9aeb8343-bfe8-4d4f-80ff-50fb55910691-kube-api-access-pbmpg\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"9aeb8343-bfe8-4d4f-80ff-50fb55910691\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.096233 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.173939 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.202084 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p"] Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.206788 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.209747 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.209962 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.211185 5001 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.215485 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p"] Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.241094 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"7f770500-4b6a-4366-b824-7ff68c9fb513","Type":"ContainerStarted","Data":"35e9d616d202f2df38aa744895e936bda9372e18bd8a7031730152113ab31cf5"} Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.245814 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k" event={"ID":"81664e15-831b-43f6-af28-95b2f545f731","Type":"ContainerStarted","Data":"a04f03da1cb8d0884a0bc28984ccb8c9152297383f7b20360a835265e6883ea4"} Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.246194 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k" event={"ID":"81664e15-831b-43f6-af28-95b2f545f731","Type":"ContainerStarted","Data":"3242d5f49ee8c526c9c0a3606a91608cb511147ec09eaa036a6e464ece7667e5"} Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.267959 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k" podStartSLOduration=1.26794048 podStartE2EDuration="1.26794048s" podCreationTimestamp="2026-01-28 17:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:38:55.265128609 +0000 UTC m=+1381.432916859" watchObservedRunningTime="2026-01-28 17:38:55.26794048 +0000 UTC m=+1381.435728720" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.386486 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ef524d0-e581-4c10-a29e-34b123dbda85-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-rpk6p\" (UID: \"0ef524d0-e581-4c10-a29e-34b123dbda85\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.386595 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99hmz\" (UniqueName: 
\"kubernetes.io/projected/0ef524d0-e581-4c10-a29e-34b123dbda85-kube-api-access-99hmz\") pod \"nova-kuttl-cell1-conductor-db-sync-rpk6p\" (UID: \"0ef524d0-e581-4c10-a29e-34b123dbda85\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.387585 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef524d0-e581-4c10-a29e-34b123dbda85-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-rpk6p\" (UID: \"0ef524d0-e581-4c10-a29e-34b123dbda85\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.488924 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef524d0-e581-4c10-a29e-34b123dbda85-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-rpk6p\" (UID: \"0ef524d0-e581-4c10-a29e-34b123dbda85\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.489029 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ef524d0-e581-4c10-a29e-34b123dbda85-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-rpk6p\" (UID: \"0ef524d0-e581-4c10-a29e-34b123dbda85\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.489068 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99hmz\" (UniqueName: \"kubernetes.io/projected/0ef524d0-e581-4c10-a29e-34b123dbda85-kube-api-access-99hmz\") pod \"nova-kuttl-cell1-conductor-db-sync-rpk6p\" (UID: \"0ef524d0-e581-4c10-a29e-34b123dbda85\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.494759 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ef524d0-e581-4c10-a29e-34b123dbda85-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-rpk6p\" (UID: \"0ef524d0-e581-4c10-a29e-34b123dbda85\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.495220 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef524d0-e581-4c10-a29e-34b123dbda85-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-rpk6p\" (UID: \"0ef524d0-e581-4c10-a29e-34b123dbda85\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.507163 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99hmz\" (UniqueName: \"kubernetes.io/projected/0ef524d0-e581-4c10-a29e-34b123dbda85-kube-api-access-99hmz\") pod \"nova-kuttl-cell1-conductor-db-sync-rpk6p\" (UID: \"0ef524d0-e581-4c10-a29e-34b123dbda85\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.542277 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p" Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.911079 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.925585 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:38:55 crc kubenswrapper[5001]: I0128 17:38:55.936169 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:38:55 crc kubenswrapper[5001]: W0128 17:38:55.945492 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9aeb8343_bfe8_4d4f_80ff_50fb55910691.slice/crio-351b4d424f8069d4de66a8fffc1470f2ee6a4fc7d812ceb810b3b363f0e6ef2b WatchSource:0}: Error finding container 351b4d424f8069d4de66a8fffc1470f2ee6a4fc7d812ceb810b3b363f0e6ef2b: Status 404 returned error can't find the container with id 351b4d424f8069d4de66a8fffc1470f2ee6a4fc7d812ceb810b3b363f0e6ef2b Jan 28 17:38:56 crc kubenswrapper[5001]: I0128 17:38:56.049149 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p"] Jan 28 17:38:56 crc kubenswrapper[5001]: W0128 17:38:56.058341 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ef524d0_e581_4c10_a29e_34b123dbda85.slice/crio-bb70895072fa8f75c3eaf315724488da8bbba1201000e606a7cda8e7dfee30fe WatchSource:0}: Error finding container bb70895072fa8f75c3eaf315724488da8bbba1201000e606a7cda8e7dfee30fe: Status 404 returned error can't find the container with id bb70895072fa8f75c3eaf315724488da8bbba1201000e606a7cda8e7dfee30fe Jan 28 17:38:56 crc kubenswrapper[5001]: I0128 17:38:56.252736 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p" event={"ID":"0ef524d0-e581-4c10-a29e-34b123dbda85","Type":"ContainerStarted","Data":"bb70895072fa8f75c3eaf315724488da8bbba1201000e606a7cda8e7dfee30fe"} Jan 28 17:38:56 crc kubenswrapper[5001]: I0128 17:38:56.253892 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ebf86c5f-6dd4-4159-b6a2-4f84d0565205","Type":"ContainerStarted","Data":"57cb93127277f865c4327037559c8d36fa9adb7d9a6dd1a5b215b1e88c13691e"} Jan 28 17:38:56 crc kubenswrapper[5001]: I0128 17:38:56.255088 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"32a9bcce-6c84-4758-aca9-5fadb324f63b","Type":"ContainerStarted","Data":"cb55bace9a936ebf613edb1eee5a44419e87fd836aeea6c31c31faa81cc8f92d"} Jan 28 17:38:56 crc kubenswrapper[5001]: I0128 17:38:56.256672 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"9aeb8343-bfe8-4d4f-80ff-50fb55910691","Type":"ContainerStarted","Data":"351b4d424f8069d4de66a8fffc1470f2ee6a4fc7d812ceb810b3b363f0e6ef2b"} Jan 28 17:38:56 crc kubenswrapper[5001]: I0128 17:38:56.303490 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fcjtk" Jan 28 17:38:56 crc kubenswrapper[5001]: I0128 17:38:56.344082 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fcjtk" Jan 28 17:38:56 
crc kubenswrapper[5001]: I0128 17:38:56.537093 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fcjtk"] Jan 28 17:38:57 crc kubenswrapper[5001]: I0128 17:38:57.267855 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"7f770500-4b6a-4366-b824-7ff68c9fb513","Type":"ContainerStarted","Data":"5bafcb601549868dfff50dc5e87d11c962a03c4e6e1f43f0648f11a614113087"} Jan 28 17:38:57 crc kubenswrapper[5001]: I0128 17:38:57.268185 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"7f770500-4b6a-4366-b824-7ff68c9fb513","Type":"ContainerStarted","Data":"070c976759101cca5fc090126b83ccc8ad6c02b2390f10825f882a3e22f537cc"} Jan 28 17:38:57 crc kubenswrapper[5001]: I0128 17:38:57.270204 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p" event={"ID":"0ef524d0-e581-4c10-a29e-34b123dbda85","Type":"ContainerStarted","Data":"51b4d49360b3ecb917a4cd4ce74f9ae094e15c8ac17f60226de6a784bc187fef"} Jan 28 17:38:57 crc kubenswrapper[5001]: I0128 17:38:57.287609 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=1.636164809 podStartE2EDuration="3.287590209s" podCreationTimestamp="2026-01-28 17:38:54 +0000 UTC" firstStartedPulling="2026-01-28 17:38:55.210875142 +0000 UTC m=+1381.378663372" lastFinishedPulling="2026-01-28 17:38:56.862300542 +0000 UTC m=+1383.030088772" observedRunningTime="2026-01-28 17:38:57.286311152 +0000 UTC m=+1383.454099382" watchObservedRunningTime="2026-01-28 17:38:57.287590209 +0000 UTC m=+1383.455378439" Jan 28 17:38:57 crc kubenswrapper[5001]: I0128 17:38:57.292602 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ebf86c5f-6dd4-4159-b6a2-4f84d0565205","Type":"ContainerStarted","Data":"777df691c1f22694f99faf2e1f4e9bbde94b7955175342f17fd97b6ef87c5356"} Jan 28 17:38:57 crc kubenswrapper[5001]: I0128 17:38:57.292651 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ebf86c5f-6dd4-4159-b6a2-4f84d0565205","Type":"ContainerStarted","Data":"92d2f426d826dc32143d9468cbf50f576a8871ff1fe26d62e6bfc16aa1b2dbfb"} Jan 28 17:38:57 crc kubenswrapper[5001]: I0128 17:38:57.306685 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p" podStartSLOduration=2.306593805 podStartE2EDuration="2.306593805s" podCreationTimestamp="2026-01-28 17:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:38:57.301367255 +0000 UTC m=+1383.469155485" watchObservedRunningTime="2026-01-28 17:38:57.306593805 +0000 UTC m=+1383.474382035" Jan 28 17:38:57 crc kubenswrapper[5001]: I0128 17:38:57.320562 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.38349961 podStartE2EDuration="3.320541405s" podCreationTimestamp="2026-01-28 17:38:54 +0000 UTC" firstStartedPulling="2026-01-28 17:38:55.926371459 +0000 UTC m=+1382.094159689" lastFinishedPulling="2026-01-28 17:38:56.863413254 +0000 UTC m=+1383.031201484" observedRunningTime="2026-01-28 17:38:57.317849208 +0000 UTC m=+1383.485637448" watchObservedRunningTime="2026-01-28 
17:38:57.320541405 +0000 UTC m=+1383.488329635" Jan 28 17:38:58 crc kubenswrapper[5001]: I0128 17:38:58.313325 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fcjtk" podUID="073d37d3-c4c4-4b95-b3ab-d4b5da7297c2" containerName="registry-server" containerID="cri-o://98a3b02e117691ebb72799415d7fbf05be23d976f3605c28513d8eaa98231a8a" gracePeriod=2 Jan 28 17:38:58 crc kubenswrapper[5001]: I0128 17:38:58.760598 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fcjtk" Jan 28 17:38:58 crc kubenswrapper[5001]: I0128 17:38:58.843629 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/073d37d3-c4c4-4b95-b3ab-d4b5da7297c2-utilities\") pod \"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2\" (UID: \"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2\") " Jan 28 17:38:58 crc kubenswrapper[5001]: I0128 17:38:58.843736 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/073d37d3-c4c4-4b95-b3ab-d4b5da7297c2-catalog-content\") pod \"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2\" (UID: \"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2\") " Jan 28 17:38:58 crc kubenswrapper[5001]: I0128 17:38:58.843835 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw52h\" (UniqueName: \"kubernetes.io/projected/073d37d3-c4c4-4b95-b3ab-d4b5da7297c2-kube-api-access-mw52h\") pod \"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2\" (UID: \"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2\") " Jan 28 17:38:58 crc kubenswrapper[5001]: I0128 17:38:58.844687 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/073d37d3-c4c4-4b95-b3ab-d4b5da7297c2-utilities" (OuterVolumeSpecName: "utilities") pod "073d37d3-c4c4-4b95-b3ab-d4b5da7297c2" (UID: "073d37d3-c4c4-4b95-b3ab-d4b5da7297c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:38:58 crc kubenswrapper[5001]: I0128 17:38:58.849956 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/073d37d3-c4c4-4b95-b3ab-d4b5da7297c2-kube-api-access-mw52h" (OuterVolumeSpecName: "kube-api-access-mw52h") pod "073d37d3-c4c4-4b95-b3ab-d4b5da7297c2" (UID: "073d37d3-c4c4-4b95-b3ab-d4b5da7297c2"). InnerVolumeSpecName "kube-api-access-mw52h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:38:58 crc kubenswrapper[5001]: I0128 17:38:58.946362 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw52h\" (UniqueName: \"kubernetes.io/projected/073d37d3-c4c4-4b95-b3ab-d4b5da7297c2-kube-api-access-mw52h\") on node \"crc\" DevicePath \"\"" Jan 28 17:38:58 crc kubenswrapper[5001]: I0128 17:38:58.946399 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/073d37d3-c4c4-4b95-b3ab-d4b5da7297c2-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:38:58 crc kubenswrapper[5001]: I0128 17:38:58.976078 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/073d37d3-c4c4-4b95-b3ab-d4b5da7297c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "073d37d3-c4c4-4b95-b3ab-d4b5da7297c2" (UID: "073d37d3-c4c4-4b95-b3ab-d4b5da7297c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.047765 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/073d37d3-c4c4-4b95-b3ab-d4b5da7297c2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.321937 5001 generic.go:334] "Generic (PLEG): container finished" podID="073d37d3-c4c4-4b95-b3ab-d4b5da7297c2" containerID="98a3b02e117691ebb72799415d7fbf05be23d976f3605c28513d8eaa98231a8a" exitCode=0 Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.322166 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fcjtk" Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.322318 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcjtk" event={"ID":"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2","Type":"ContainerDied","Data":"98a3b02e117691ebb72799415d7fbf05be23d976f3605c28513d8eaa98231a8a"} Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.322354 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcjtk" event={"ID":"073d37d3-c4c4-4b95-b3ab-d4b5da7297c2","Type":"ContainerDied","Data":"f3c3f55b4a16d511bb5b4538fd2f95ff98b49a9e3e5158a1e3cd64e7e6f7254f"} Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.322376 5001 scope.go:117] "RemoveContainer" containerID="98a3b02e117691ebb72799415d7fbf05be23d976f3605c28513d8eaa98231a8a" Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.326964 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"32a9bcce-6c84-4758-aca9-5fadb324f63b","Type":"ContainerStarted","Data":"f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f"} Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.328568 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"9aeb8343-bfe8-4d4f-80ff-50fb55910691","Type":"ContainerStarted","Data":"7aacb92facc3dd83a3b4b66af50df06b554ec71673769abd3a77a945addc608e"} Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.358920 5001 scope.go:117] "RemoveContainer" containerID="ddb4fce535609c1960e341ca803f4a524e216ceb30b35f9ac6a3d9f5bc7501d5" Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.370247 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=3.058328899 podStartE2EDuration="5.370227316s" podCreationTimestamp="2026-01-28 17:38:54 +0000 UTC" firstStartedPulling="2026-01-28 17:38:55.927220573 +0000 UTC m=+1382.095008803" lastFinishedPulling="2026-01-28 17:38:58.23911899 +0000 UTC m=+1384.406907220" observedRunningTime="2026-01-28 17:38:59.354376701 +0000 UTC m=+1385.522164941" watchObservedRunningTime="2026-01-28 17:38:59.370227316 +0000 UTC m=+1385.538015546" Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.377012 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fcjtk"] Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.391077 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fcjtk"] Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.398219 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=3.099974644 podStartE2EDuration="5.398197889s" podCreationTimestamp="2026-01-28 17:38:54 +0000 UTC" firstStartedPulling="2026-01-28 17:38:55.947807664 +0000 UTC m=+1382.115595894" lastFinishedPulling="2026-01-28 17:38:58.246030889 +0000 UTC m=+1384.413819139" observedRunningTime="2026-01-28 17:38:59.384507986 +0000 UTC m=+1385.552296216" watchObservedRunningTime="2026-01-28 17:38:59.398197889 +0000 UTC m=+1385.565986119" Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.411126 5001 scope.go:117] "RemoveContainer" containerID="3149fd3fef1cf5ae3b531ba3556c79e89b26939bed127e6f60f58760a980778a" Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.427850 5001 scope.go:117] "RemoveContainer" containerID="98a3b02e117691ebb72799415d7fbf05be23d976f3605c28513d8eaa98231a8a" Jan 28 17:38:59 crc kubenswrapper[5001]: E0128 17:38:59.428356 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98a3b02e117691ebb72799415d7fbf05be23d976f3605c28513d8eaa98231a8a\": container with ID starting with 98a3b02e117691ebb72799415d7fbf05be23d976f3605c28513d8eaa98231a8a not found: ID does not exist" containerID="98a3b02e117691ebb72799415d7fbf05be23d976f3605c28513d8eaa98231a8a" Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.428393 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98a3b02e117691ebb72799415d7fbf05be23d976f3605c28513d8eaa98231a8a"} err="failed to get container status \"98a3b02e117691ebb72799415d7fbf05be23d976f3605c28513d8eaa98231a8a\": rpc error: code = NotFound desc = could not find container \"98a3b02e117691ebb72799415d7fbf05be23d976f3605c28513d8eaa98231a8a\": container with ID starting with 98a3b02e117691ebb72799415d7fbf05be23d976f3605c28513d8eaa98231a8a not found: ID does not exist" Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.428422 5001 scope.go:117] "RemoveContainer" containerID="ddb4fce535609c1960e341ca803f4a524e216ceb30b35f9ac6a3d9f5bc7501d5" Jan 28 17:38:59 crc kubenswrapper[5001]: E0128 17:38:59.428699 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddb4fce535609c1960e341ca803f4a524e216ceb30b35f9ac6a3d9f5bc7501d5\": container with ID starting with ddb4fce535609c1960e341ca803f4a524e216ceb30b35f9ac6a3d9f5bc7501d5 not found: ID does not exist" containerID="ddb4fce535609c1960e341ca803f4a524e216ceb30b35f9ac6a3d9f5bc7501d5" Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.428724 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddb4fce535609c1960e341ca803f4a524e216ceb30b35f9ac6a3d9f5bc7501d5"} err="failed to get container status \"ddb4fce535609c1960e341ca803f4a524e216ceb30b35f9ac6a3d9f5bc7501d5\": rpc error: code = NotFound desc = could not find container \"ddb4fce535609c1960e341ca803f4a524e216ceb30b35f9ac6a3d9f5bc7501d5\": container with ID starting with ddb4fce535609c1960e341ca803f4a524e216ceb30b35f9ac6a3d9f5bc7501d5 not found: ID does not exist" Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.428741 5001 scope.go:117] "RemoveContainer" containerID="3149fd3fef1cf5ae3b531ba3556c79e89b26939bed127e6f60f58760a980778a" Jan 28 17:38:59 crc kubenswrapper[5001]: E0128 17:38:59.428964 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3149fd3fef1cf5ae3b531ba3556c79e89b26939bed127e6f60f58760a980778a\": container with ID starting with 3149fd3fef1cf5ae3b531ba3556c79e89b26939bed127e6f60f58760a980778a not found: ID does not exist" containerID="3149fd3fef1cf5ae3b531ba3556c79e89b26939bed127e6f60f58760a980778a" Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.429015 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3149fd3fef1cf5ae3b531ba3556c79e89b26939bed127e6f60f58760a980778a"} err="failed to get container status \"3149fd3fef1cf5ae3b531ba3556c79e89b26939bed127e6f60f58760a980778a\": rpc error: code = NotFound desc = could not find container \"3149fd3fef1cf5ae3b531ba3556c79e89b26939bed127e6f60f58760a980778a\": container with ID starting with 3149fd3fef1cf5ae3b531ba3556c79e89b26939bed127e6f60f58760a980778a not found: ID does not exist" Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.748893 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.868214 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:38:59 crc kubenswrapper[5001]: I0128 17:38:59.868267 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:00 crc kubenswrapper[5001]: I0128 17:39:00.097465 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:39:00 crc kubenswrapper[5001]: I0128 17:39:00.607169 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="073d37d3-c4c4-4b95-b3ab-d4b5da7297c2" path="/var/lib/kubelet/pods/073d37d3-c4c4-4b95-b3ab-d4b5da7297c2/volumes" Jan 28 17:39:02 crc kubenswrapper[5001]: I0128 17:39:02.353198 5001 generic.go:334] "Generic (PLEG): container finished" podID="0ef524d0-e581-4c10-a29e-34b123dbda85" containerID="51b4d49360b3ecb917a4cd4ce74f9ae094e15c8ac17f60226de6a784bc187fef" exitCode=0 Jan 28 17:39:02 crc kubenswrapper[5001]: I0128 17:39:02.353251 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p" event={"ID":"0ef524d0-e581-4c10-a29e-34b123dbda85","Type":"ContainerDied","Data":"51b4d49360b3ecb917a4cd4ce74f9ae094e15c8ac17f60226de6a784bc187fef"} Jan 28 17:39:03 crc kubenswrapper[5001]: I0128 17:39:03.364498 5001 generic.go:334] "Generic (PLEG): container finished" podID="81664e15-831b-43f6-af28-95b2f545f731" containerID="a04f03da1cb8d0884a0bc28984ccb8c9152297383f7b20360a835265e6883ea4" exitCode=0 Jan 28 17:39:03 crc kubenswrapper[5001]: I0128 17:39:03.364701 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k" event={"ID":"81664e15-831b-43f6-af28-95b2f545f731","Type":"ContainerDied","Data":"a04f03da1cb8d0884a0bc28984ccb8c9152297383f7b20360a835265e6883ea4"} Jan 28 17:39:03 crc kubenswrapper[5001]: I0128 17:39:03.694202 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p" Jan 28 17:39:03 crc kubenswrapper[5001]: I0128 17:39:03.826209 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ef524d0-e581-4c10-a29e-34b123dbda85-scripts\") pod \"0ef524d0-e581-4c10-a29e-34b123dbda85\" (UID: \"0ef524d0-e581-4c10-a29e-34b123dbda85\") " Jan 28 17:39:03 crc kubenswrapper[5001]: I0128 17:39:03.826606 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99hmz\" (UniqueName: \"kubernetes.io/projected/0ef524d0-e581-4c10-a29e-34b123dbda85-kube-api-access-99hmz\") pod \"0ef524d0-e581-4c10-a29e-34b123dbda85\" (UID: \"0ef524d0-e581-4c10-a29e-34b123dbda85\") " Jan 28 17:39:03 crc kubenswrapper[5001]: I0128 17:39:03.826673 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef524d0-e581-4c10-a29e-34b123dbda85-config-data\") pod \"0ef524d0-e581-4c10-a29e-34b123dbda85\" (UID: \"0ef524d0-e581-4c10-a29e-34b123dbda85\") " Jan 28 17:39:03 crc kubenswrapper[5001]: I0128 17:39:03.831450 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ef524d0-e581-4c10-a29e-34b123dbda85-scripts" (OuterVolumeSpecName: "scripts") pod "0ef524d0-e581-4c10-a29e-34b123dbda85" (UID: "0ef524d0-e581-4c10-a29e-34b123dbda85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:39:03 crc kubenswrapper[5001]: I0128 17:39:03.832599 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ef524d0-e581-4c10-a29e-34b123dbda85-kube-api-access-99hmz" (OuterVolumeSpecName: "kube-api-access-99hmz") pod "0ef524d0-e581-4c10-a29e-34b123dbda85" (UID: "0ef524d0-e581-4c10-a29e-34b123dbda85"). InnerVolumeSpecName "kube-api-access-99hmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:39:03 crc kubenswrapper[5001]: I0128 17:39:03.849287 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ef524d0-e581-4c10-a29e-34b123dbda85-config-data" (OuterVolumeSpecName: "config-data") pod "0ef524d0-e581-4c10-a29e-34b123dbda85" (UID: "0ef524d0-e581-4c10-a29e-34b123dbda85"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:39:03 crc kubenswrapper[5001]: I0128 17:39:03.928567 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef524d0-e581-4c10-a29e-34b123dbda85-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:03 crc kubenswrapper[5001]: I0128 17:39:03.928616 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ef524d0-e581-4c10-a29e-34b123dbda85-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:03 crc kubenswrapper[5001]: I0128 17:39:03.928629 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99hmz\" (UniqueName: \"kubernetes.io/projected/0ef524d0-e581-4c10-a29e-34b123dbda85-kube-api-access-99hmz\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.376682 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p" event={"ID":"0ef524d0-e581-4c10-a29e-34b123dbda85","Type":"ContainerDied","Data":"bb70895072fa8f75c3eaf315724488da8bbba1201000e606a7cda8e7dfee30fe"} Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.376741 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb70895072fa8f75c3eaf315724488da8bbba1201000e606a7cda8e7dfee30fe" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.376802 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.437091 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:39:04 crc kubenswrapper[5001]: E0128 17:39:04.437458 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="073d37d3-c4c4-4b95-b3ab-d4b5da7297c2" containerName="extract-content" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.437476 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="073d37d3-c4c4-4b95-b3ab-d4b5da7297c2" containerName="extract-content" Jan 28 17:39:04 crc kubenswrapper[5001]: E0128 17:39:04.437498 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="073d37d3-c4c4-4b95-b3ab-d4b5da7297c2" containerName="extract-utilities" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.437505 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="073d37d3-c4c4-4b95-b3ab-d4b5da7297c2" containerName="extract-utilities" Jan 28 17:39:04 crc kubenswrapper[5001]: E0128 17:39:04.437515 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="073d37d3-c4c4-4b95-b3ab-d4b5da7297c2" containerName="registry-server" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.437523 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="073d37d3-c4c4-4b95-b3ab-d4b5da7297c2" containerName="registry-server" Jan 28 17:39:04 crc kubenswrapper[5001]: E0128 17:39:04.437532 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef524d0-e581-4c10-a29e-34b123dbda85" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.437540 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef524d0-e581-4c10-a29e-34b123dbda85" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.437710 5001 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="073d37d3-c4c4-4b95-b3ab-d4b5da7297c2" containerName="registry-server" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.437738 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef524d0-e581-4c10-a29e-34b123dbda85" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.438411 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.441273 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.448133 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.537740 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f74f30e-eae5-44b0-b858-08841c899345-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"1f74f30e-eae5-44b0-b858-08841c899345\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.537841 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l48rp\" (UniqueName: \"kubernetes.io/projected/1f74f30e-eae5-44b0-b858-08841c899345-kube-api-access-l48rp\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"1f74f30e-eae5-44b0-b858-08841c899345\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.639549 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f74f30e-eae5-44b0-b858-08841c899345-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"1f74f30e-eae5-44b0-b858-08841c899345\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.639677 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l48rp\" (UniqueName: \"kubernetes.io/projected/1f74f30e-eae5-44b0-b858-08841c899345-kube-api-access-l48rp\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"1f74f30e-eae5-44b0-b858-08841c899345\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.641452 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.641523 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.644792 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f74f30e-eae5-44b0-b858-08841c899345-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"1f74f30e-eae5-44b0-b858-08841c899345\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.663248 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l48rp\" (UniqueName: \"kubernetes.io/projected/1f74f30e-eae5-44b0-b858-08841c899345-kube-api-access-l48rp\") pod \"nova-kuttl-cell1-conductor-0\" (UID: 
\"1f74f30e-eae5-44b0-b858-08841c899345\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.707600 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.748943 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.761869 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.771918 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.842645 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjp7h\" (UniqueName: \"kubernetes.io/projected/81664e15-831b-43f6-af28-95b2f545f731-kube-api-access-qjp7h\") pod \"81664e15-831b-43f6-af28-95b2f545f731\" (UID: \"81664e15-831b-43f6-af28-95b2f545f731\") " Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.842707 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81664e15-831b-43f6-af28-95b2f545f731-scripts\") pod \"81664e15-831b-43f6-af28-95b2f545f731\" (UID: \"81664e15-831b-43f6-af28-95b2f545f731\") " Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.842748 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81664e15-831b-43f6-af28-95b2f545f731-config-data\") pod \"81664e15-831b-43f6-af28-95b2f545f731\" (UID: \"81664e15-831b-43f6-af28-95b2f545f731\") " Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.846340 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81664e15-831b-43f6-af28-95b2f545f731-kube-api-access-qjp7h" (OuterVolumeSpecName: "kube-api-access-qjp7h") pod "81664e15-831b-43f6-af28-95b2f545f731" (UID: "81664e15-831b-43f6-af28-95b2f545f731"). InnerVolumeSpecName "kube-api-access-qjp7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.846430 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81664e15-831b-43f6-af28-95b2f545f731-scripts" (OuterVolumeSpecName: "scripts") pod "81664e15-831b-43f6-af28-95b2f545f731" (UID: "81664e15-831b-43f6-af28-95b2f545f731"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.867872 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.867931 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.875778 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81664e15-831b-43f6-af28-95b2f545f731-config-data" (OuterVolumeSpecName: "config-data") pod "81664e15-831b-43f6-af28-95b2f545f731" (UID: "81664e15-831b-43f6-af28-95b2f545f731"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.946356 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81664e15-831b-43f6-af28-95b2f545f731-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.946715 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81664e15-831b-43f6-af28-95b2f545f731-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:04 crc kubenswrapper[5001]: I0128 17:39:04.946731 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjp7h\" (UniqueName: \"kubernetes.io/projected/81664e15-831b-43f6-af28-95b2f545f731-kube-api-access-qjp7h\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.097700 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.110337 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.198694 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.386670 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k" event={"ID":"81664e15-831b-43f6-af28-95b2f545f731","Type":"ContainerDied","Data":"3242d5f49ee8c526c9c0a3606a91608cb511147ec09eaa036a6e464ece7667e5"} Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.386715 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3242d5f49ee8c526c9c0a3606a91608cb511147ec09eaa036a6e464ece7667e5" Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.386794 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k" Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.391204 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"1f74f30e-eae5-44b0-b858-08841c899345","Type":"ContainerStarted","Data":"fb57ab9e1ccb67a23118791f0f0a107ad4928f86480088d664ab999b7f567209"} Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.391255 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"1f74f30e-eae5-44b0-b858-08841c899345","Type":"ContainerStarted","Data":"10aa9f8261ff23a68835af2bddceadb14bf4a0d798bd8628f88e718284b7c8ec"} Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.391582 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.426543 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=1.42652443 podStartE2EDuration="1.42652443s" podCreationTimestamp="2026-01-28 17:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:39:05.425911652 +0000 UTC m=+1391.593699892" watchObservedRunningTime="2026-01-28 17:39:05.42652443 +0000 UTC m=+1391.594312660" Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.427136 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.430562 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.675388 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.675957 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="7f770500-4b6a-4366-b824-7ff68c9fb513" containerName="nova-kuttl-api-log" containerID="cri-o://070c976759101cca5fc090126b83ccc8ad6c02b2390f10825f882a3e22f537cc" gracePeriod=30 Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.676061 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="7f770500-4b6a-4366-b824-7ff68c9fb513" containerName="nova-kuttl-api-api" containerID="cri-o://5bafcb601549868dfff50dc5e87d11c962a03c4e6e1f43f0648f11a614113087" gracePeriod=30 Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.684039 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="7f770500-4b6a-4366-b824-7ff68c9fb513" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.128:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.684073 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="7f770500-4b6a-4366-b824-7ff68c9fb513" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.128:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:39:05 crc 
kubenswrapper[5001]: I0128 17:39:05.753467 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.753849 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="ebf86c5f-6dd4-4159-b6a2-4f84d0565205" containerName="nova-kuttl-metadata-log" containerID="cri-o://92d2f426d826dc32143d9468cbf50f576a8871ff1fe26d62e6bfc16aa1b2dbfb" gracePeriod=30 Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.753998 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="ebf86c5f-6dd4-4159-b6a2-4f84d0565205" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://777df691c1f22694f99faf2e1f4e9bbde94b7955175342f17fd97b6ef87c5356" gracePeriod=30 Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.759836 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="ebf86c5f-6dd4-4159-b6a2-4f84d0565205" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.130:8775/\": EOF" Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.764484 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="ebf86c5f-6dd4-4159-b6a2-4f84d0565205" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.130:8775/\": EOF" Jan 28 17:39:05 crc kubenswrapper[5001]: I0128 17:39:05.901747 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:39:06 crc kubenswrapper[5001]: I0128 17:39:06.399904 5001 generic.go:334] "Generic (PLEG): container finished" podID="ebf86c5f-6dd4-4159-b6a2-4f84d0565205" containerID="92d2f426d826dc32143d9468cbf50f576a8871ff1fe26d62e6bfc16aa1b2dbfb" exitCode=143 Jan 28 17:39:06 crc kubenswrapper[5001]: I0128 17:39:06.400014 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ebf86c5f-6dd4-4159-b6a2-4f84d0565205","Type":"ContainerDied","Data":"92d2f426d826dc32143d9468cbf50f576a8871ff1fe26d62e6bfc16aa1b2dbfb"} Jan 28 17:39:06 crc kubenswrapper[5001]: I0128 17:39:06.402021 5001 generic.go:334] "Generic (PLEG): container finished" podID="7f770500-4b6a-4366-b824-7ff68c9fb513" containerID="070c976759101cca5fc090126b83ccc8ad6c02b2390f10825f882a3e22f537cc" exitCode=143 Jan 28 17:39:06 crc kubenswrapper[5001]: I0128 17:39:06.402152 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"7f770500-4b6a-4366-b824-7ff68c9fb513","Type":"ContainerDied","Data":"070c976759101cca5fc090126b83ccc8ad6c02b2390f10825f882a3e22f537cc"} Jan 28 17:39:07 crc kubenswrapper[5001]: I0128 17:39:07.408609 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="32a9bcce-6c84-4758-aca9-5fadb324f63b" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f" gracePeriod=30 Jan 28 17:39:09 crc kubenswrapper[5001]: E0128 17:39:09.749993 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f is running failed: container process not 
found" containerID="f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:39:09 crc kubenswrapper[5001]: E0128 17:39:09.750658 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f is running failed: container process not found" containerID="f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:39:09 crc kubenswrapper[5001]: E0128 17:39:09.750911 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f is running failed: container process not found" containerID="f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:39:09 crc kubenswrapper[5001]: E0128 17:39:09.750945 5001 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f is running failed: container process not found" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="32a9bcce-6c84-4758-aca9-5fadb324f63b" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:39:09 crc kubenswrapper[5001]: I0128 17:39:09.911751 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.023377 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a9bcce-6c84-4758-aca9-5fadb324f63b-config-data\") pod \"32a9bcce-6c84-4758-aca9-5fadb324f63b\" (UID: \"32a9bcce-6c84-4758-aca9-5fadb324f63b\") " Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.023634 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l85hk\" (UniqueName: \"kubernetes.io/projected/32a9bcce-6c84-4758-aca9-5fadb324f63b-kube-api-access-l85hk\") pod \"32a9bcce-6c84-4758-aca9-5fadb324f63b\" (UID: \"32a9bcce-6c84-4758-aca9-5fadb324f63b\") " Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.028757 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a9bcce-6c84-4758-aca9-5fadb324f63b-kube-api-access-l85hk" (OuterVolumeSpecName: "kube-api-access-l85hk") pod "32a9bcce-6c84-4758-aca9-5fadb324f63b" (UID: "32a9bcce-6c84-4758-aca9-5fadb324f63b"). InnerVolumeSpecName "kube-api-access-l85hk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.047168 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a9bcce-6c84-4758-aca9-5fadb324f63b-config-data" (OuterVolumeSpecName: "config-data") pod "32a9bcce-6c84-4758-aca9-5fadb324f63b" (UID: "32a9bcce-6c84-4758-aca9-5fadb324f63b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.125869 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l85hk\" (UniqueName: \"kubernetes.io/projected/32a9bcce-6c84-4758-aca9-5fadb324f63b-kube-api-access-l85hk\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.125911 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a9bcce-6c84-4758-aca9-5fadb324f63b-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.389734 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.433607 5001 generic.go:334] "Generic (PLEG): container finished" podID="ebf86c5f-6dd4-4159-b6a2-4f84d0565205" containerID="777df691c1f22694f99faf2e1f4e9bbde94b7955175342f17fd97b6ef87c5356" exitCode=0 Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.433711 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ebf86c5f-6dd4-4159-b6a2-4f84d0565205","Type":"ContainerDied","Data":"777df691c1f22694f99faf2e1f4e9bbde94b7955175342f17fd97b6ef87c5356"} Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.433714 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.433776 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ebf86c5f-6dd4-4159-b6a2-4f84d0565205","Type":"ContainerDied","Data":"57cb93127277f865c4327037559c8d36fa9adb7d9a6dd1a5b215b1e88c13691e"} Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.433802 5001 scope.go:117] "RemoveContainer" containerID="777df691c1f22694f99faf2e1f4e9bbde94b7955175342f17fd97b6ef87c5356" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.439477 5001 generic.go:334] "Generic (PLEG): container finished" podID="32a9bcce-6c84-4758-aca9-5fadb324f63b" containerID="f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f" exitCode=0 Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.439521 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"32a9bcce-6c84-4758-aca9-5fadb324f63b","Type":"ContainerDied","Data":"f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f"} Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.439549 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"32a9bcce-6c84-4758-aca9-5fadb324f63b","Type":"ContainerDied","Data":"cb55bace9a936ebf613edb1eee5a44419e87fd836aeea6c31c31faa81cc8f92d"} Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.439603 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.465680 5001 scope.go:117] "RemoveContainer" containerID="92d2f426d826dc32143d9468cbf50f576a8871ff1fe26d62e6bfc16aa1b2dbfb" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.472960 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.488462 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.496105 5001 scope.go:117] "RemoveContainer" containerID="777df691c1f22694f99faf2e1f4e9bbde94b7955175342f17fd97b6ef87c5356" Jan 28 17:39:10 crc kubenswrapper[5001]: E0128 17:39:10.496754 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"777df691c1f22694f99faf2e1f4e9bbde94b7955175342f17fd97b6ef87c5356\": container with ID starting with 777df691c1f22694f99faf2e1f4e9bbde94b7955175342f17fd97b6ef87c5356 not found: ID does not exist" containerID="777df691c1f22694f99faf2e1f4e9bbde94b7955175342f17fd97b6ef87c5356" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.496818 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"777df691c1f22694f99faf2e1f4e9bbde94b7955175342f17fd97b6ef87c5356"} err="failed to get container status \"777df691c1f22694f99faf2e1f4e9bbde94b7955175342f17fd97b6ef87c5356\": rpc error: code = NotFound desc = could not find container \"777df691c1f22694f99faf2e1f4e9bbde94b7955175342f17fd97b6ef87c5356\": container with ID starting with 777df691c1f22694f99faf2e1f4e9bbde94b7955175342f17fd97b6ef87c5356 not found: ID does not exist" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.496870 5001 scope.go:117] "RemoveContainer" containerID="92d2f426d826dc32143d9468cbf50f576a8871ff1fe26d62e6bfc16aa1b2dbfb" Jan 28 17:39:10 crc kubenswrapper[5001]: E0128 17:39:10.497467 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92d2f426d826dc32143d9468cbf50f576a8871ff1fe26d62e6bfc16aa1b2dbfb\": container with ID starting with 92d2f426d826dc32143d9468cbf50f576a8871ff1fe26d62e6bfc16aa1b2dbfb not found: ID does not exist" containerID="92d2f426d826dc32143d9468cbf50f576a8871ff1fe26d62e6bfc16aa1b2dbfb" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.497511 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92d2f426d826dc32143d9468cbf50f576a8871ff1fe26d62e6bfc16aa1b2dbfb"} err="failed to get container status \"92d2f426d826dc32143d9468cbf50f576a8871ff1fe26d62e6bfc16aa1b2dbfb\": rpc error: code = NotFound desc = could not find container \"92d2f426d826dc32143d9468cbf50f576a8871ff1fe26d62e6bfc16aa1b2dbfb\": container with ID starting with 92d2f426d826dc32143d9468cbf50f576a8871ff1fe26d62e6bfc16aa1b2dbfb not found: ID does not exist" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.497543 5001 scope.go:117] "RemoveContainer" containerID="f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.500546 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:39:10 crc kubenswrapper[5001]: E0128 17:39:10.500889 5001 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ebf86c5f-6dd4-4159-b6a2-4f84d0565205" containerName="nova-kuttl-metadata-metadata" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.500920 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf86c5f-6dd4-4159-b6a2-4f84d0565205" containerName="nova-kuttl-metadata-metadata" Jan 28 17:39:10 crc kubenswrapper[5001]: E0128 17:39:10.500937 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf86c5f-6dd4-4159-b6a2-4f84d0565205" containerName="nova-kuttl-metadata-log" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.500945 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf86c5f-6dd4-4159-b6a2-4f84d0565205" containerName="nova-kuttl-metadata-log" Jan 28 17:39:10 crc kubenswrapper[5001]: E0128 17:39:10.500957 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a9bcce-6c84-4758-aca9-5fadb324f63b" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.500964 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a9bcce-6c84-4758-aca9-5fadb324f63b" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:39:10 crc kubenswrapper[5001]: E0128 17:39:10.501004 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81664e15-831b-43f6-af28-95b2f545f731" containerName="nova-manage" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.501013 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="81664e15-831b-43f6-af28-95b2f545f731" containerName="nova-manage" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.501164 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf86c5f-6dd4-4159-b6a2-4f84d0565205" containerName="nova-kuttl-metadata-metadata" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.501177 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a9bcce-6c84-4758-aca9-5fadb324f63b" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.501190 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf86c5f-6dd4-4159-b6a2-4f84d0565205" containerName="nova-kuttl-metadata-log" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.501197 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="81664e15-831b-43f6-af28-95b2f545f731" containerName="nova-manage" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.501742 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.503655 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.520891 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.528966 5001 scope.go:117] "RemoveContainer" containerID="f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f" Jan 28 17:39:10 crc kubenswrapper[5001]: E0128 17:39:10.529472 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f\": container with ID starting with f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f not found: ID does not exist" containerID="f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.529522 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f"} err="failed to get container status \"f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f\": rpc error: code = NotFound desc = could not find container \"f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f\": container with ID starting with f98966cca85d4d74b77639fde094ca7235248191baf5cc70ef018c0788e6e82f not found: ID does not exist" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.533409 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf86c5f-6dd4-4159-b6a2-4f84d0565205-config-data\") pod \"ebf86c5f-6dd4-4159-b6a2-4f84d0565205\" (UID: \"ebf86c5f-6dd4-4159-b6a2-4f84d0565205\") " Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.533631 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcgdg\" (UniqueName: \"kubernetes.io/projected/ebf86c5f-6dd4-4159-b6a2-4f84d0565205-kube-api-access-lcgdg\") pod \"ebf86c5f-6dd4-4159-b6a2-4f84d0565205\" (UID: \"ebf86c5f-6dd4-4159-b6a2-4f84d0565205\") " Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.533675 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebf86c5f-6dd4-4159-b6a2-4f84d0565205-logs\") pod \"ebf86c5f-6dd4-4159-b6a2-4f84d0565205\" (UID: \"ebf86c5f-6dd4-4159-b6a2-4f84d0565205\") " Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.534645 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebf86c5f-6dd4-4159-b6a2-4f84d0565205-logs" (OuterVolumeSpecName: "logs") pod "ebf86c5f-6dd4-4159-b6a2-4f84d0565205" (UID: "ebf86c5f-6dd4-4159-b6a2-4f84d0565205"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.538498 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebf86c5f-6dd4-4159-b6a2-4f84d0565205-kube-api-access-lcgdg" (OuterVolumeSpecName: "kube-api-access-lcgdg") pod "ebf86c5f-6dd4-4159-b6a2-4f84d0565205" (UID: "ebf86c5f-6dd4-4159-b6a2-4f84d0565205"). InnerVolumeSpecName "kube-api-access-lcgdg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.554152 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf86c5f-6dd4-4159-b6a2-4f84d0565205-config-data" (OuterVolumeSpecName: "config-data") pod "ebf86c5f-6dd4-4159-b6a2-4f84d0565205" (UID: "ebf86c5f-6dd4-4159-b6a2-4f84d0565205"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.604815 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32a9bcce-6c84-4758-aca9-5fadb324f63b" path="/var/lib/kubelet/pods/32a9bcce-6c84-4758-aca9-5fadb324f63b/volumes" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.636719 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rkd2\" (UniqueName: \"kubernetes.io/projected/3cad2c50-3290-4d03-841c-82c907cef1b9-kube-api-access-5rkd2\") pod \"nova-kuttl-scheduler-0\" (UID: \"3cad2c50-3290-4d03-841c-82c907cef1b9\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.636792 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cad2c50-3290-4d03-841c-82c907cef1b9-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3cad2c50-3290-4d03-841c-82c907cef1b9\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.637196 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcgdg\" (UniqueName: \"kubernetes.io/projected/ebf86c5f-6dd4-4159-b6a2-4f84d0565205-kube-api-access-lcgdg\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.637234 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebf86c5f-6dd4-4159-b6a2-4f84d0565205-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.637245 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf86c5f-6dd4-4159-b6a2-4f84d0565205-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.741418 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rkd2\" (UniqueName: \"kubernetes.io/projected/3cad2c50-3290-4d03-841c-82c907cef1b9-kube-api-access-5rkd2\") pod \"nova-kuttl-scheduler-0\" (UID: \"3cad2c50-3290-4d03-841c-82c907cef1b9\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.741569 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cad2c50-3290-4d03-841c-82c907cef1b9-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3cad2c50-3290-4d03-841c-82c907cef1b9\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.747156 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cad2c50-3290-4d03-841c-82c907cef1b9-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3cad2c50-3290-4d03-841c-82c907cef1b9\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.763579 5001 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rkd2\" (UniqueName: \"kubernetes.io/projected/3cad2c50-3290-4d03-841c-82c907cef1b9-kube-api-access-5rkd2\") pod \"nova-kuttl-scheduler-0\" (UID: \"3cad2c50-3290-4d03-841c-82c907cef1b9\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.768637 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.781146 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.797658 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.799803 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.803556 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.807092 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.824535 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.945228 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd78dba-3413-48b2-ad2c-05f8e426e064-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"6cd78dba-3413-48b2-ad2c-05f8e426e064\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.945574 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cd78dba-3413-48b2-ad2c-05f8e426e064-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"6cd78dba-3413-48b2-ad2c-05f8e426e064\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:10 crc kubenswrapper[5001]: I0128 17:39:10.945661 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8vr9\" (UniqueName: \"kubernetes.io/projected/6cd78dba-3413-48b2-ad2c-05f8e426e064-kube-api-access-w8vr9\") pod \"nova-kuttl-metadata-0\" (UID: \"6cd78dba-3413-48b2-ad2c-05f8e426e064\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.047490 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8vr9\" (UniqueName: \"kubernetes.io/projected/6cd78dba-3413-48b2-ad2c-05f8e426e064-kube-api-access-w8vr9\") pod \"nova-kuttl-metadata-0\" (UID: \"6cd78dba-3413-48b2-ad2c-05f8e426e064\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.047572 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd78dba-3413-48b2-ad2c-05f8e426e064-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"6cd78dba-3413-48b2-ad2c-05f8e426e064\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:11 crc kubenswrapper[5001]: 
I0128 17:39:11.048374 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cd78dba-3413-48b2-ad2c-05f8e426e064-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"6cd78dba-3413-48b2-ad2c-05f8e426e064\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.048726 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cd78dba-3413-48b2-ad2c-05f8e426e064-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"6cd78dba-3413-48b2-ad2c-05f8e426e064\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.053883 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd78dba-3413-48b2-ad2c-05f8e426e064-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"6cd78dba-3413-48b2-ad2c-05f8e426e064\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.063259 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8vr9\" (UniqueName: \"kubernetes.io/projected/6cd78dba-3413-48b2-ad2c-05f8e426e064-kube-api-access-w8vr9\") pod \"nova-kuttl-metadata-0\" (UID: \"6cd78dba-3413-48b2-ad2c-05f8e426e064\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.139621 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.250663 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.452095 5001 generic.go:334] "Generic (PLEG): container finished" podID="7f770500-4b6a-4366-b824-7ff68c9fb513" containerID="5bafcb601549868dfff50dc5e87d11c962a03c4e6e1f43f0648f11a614113087" exitCode=0 Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.452186 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"7f770500-4b6a-4366-b824-7ff68c9fb513","Type":"ContainerDied","Data":"5bafcb601549868dfff50dc5e87d11c962a03c4e6e1f43f0648f11a614113087"} Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.453989 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3cad2c50-3290-4d03-841c-82c907cef1b9","Type":"ContainerStarted","Data":"029713f9eed3351fbb132348d322596f9f9f83dd25da2272f7225e63969f98fd"} Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.454029 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3cad2c50-3290-4d03-841c-82c907cef1b9","Type":"ContainerStarted","Data":"157ffa28b5ad002dfa5ed3c77b17e1493a27c23c028e06faddc2bb85f13d4d8f"} Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.529681 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.555905 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=1.5558898509999999 podStartE2EDuration="1.555889851s" podCreationTimestamp="2026-01-28 17:39:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:39:11.471352094 +0000 UTC m=+1397.639140324" watchObservedRunningTime="2026-01-28 17:39:11.555889851 +0000 UTC m=+1397.723678081" Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.573514 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:39:11 crc kubenswrapper[5001]: W0128 17:39:11.579733 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cd78dba_3413_48b2_ad2c_05f8e426e064.slice/crio-b2110e6aa889c30c207995ef8d2ebc5c57f153a5eff47629607f59cb1b3da7a2 WatchSource:0}: Error finding container b2110e6aa889c30c207995ef8d2ebc5c57f153a5eff47629607f59cb1b3da7a2: Status 404 returned error can't find the container with id b2110e6aa889c30c207995ef8d2ebc5c57f153a5eff47629607f59cb1b3da7a2 Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.656107 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f770500-4b6a-4366-b824-7ff68c9fb513-config-data\") pod \"7f770500-4b6a-4366-b824-7ff68c9fb513\" (UID: \"7f770500-4b6a-4366-b824-7ff68c9fb513\") " Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.656199 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f770500-4b6a-4366-b824-7ff68c9fb513-logs\") pod \"7f770500-4b6a-4366-b824-7ff68c9fb513\" (UID: \"7f770500-4b6a-4366-b824-7ff68c9fb513\") " Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.656312 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48gln\" (UniqueName: \"kubernetes.io/projected/7f770500-4b6a-4366-b824-7ff68c9fb513-kube-api-access-48gln\") pod \"7f770500-4b6a-4366-b824-7ff68c9fb513\" (UID: \"7f770500-4b6a-4366-b824-7ff68c9fb513\") " Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.656923 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f770500-4b6a-4366-b824-7ff68c9fb513-logs" (OuterVolumeSpecName: "logs") pod "7f770500-4b6a-4366-b824-7ff68c9fb513" (UID: "7f770500-4b6a-4366-b824-7ff68c9fb513"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.657195 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f770500-4b6a-4366-b824-7ff68c9fb513-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.661951 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f770500-4b6a-4366-b824-7ff68c9fb513-kube-api-access-48gln" (OuterVolumeSpecName: "kube-api-access-48gln") pod "7f770500-4b6a-4366-b824-7ff68c9fb513" (UID: "7f770500-4b6a-4366-b824-7ff68c9fb513"). InnerVolumeSpecName "kube-api-access-48gln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.678541 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f770500-4b6a-4366-b824-7ff68c9fb513-config-data" (OuterVolumeSpecName: "config-data") pod "7f770500-4b6a-4366-b824-7ff68c9fb513" (UID: "7f770500-4b6a-4366-b824-7ff68c9fb513"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.758747 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48gln\" (UniqueName: \"kubernetes.io/projected/7f770500-4b6a-4366-b824-7ff68c9fb513-kube-api-access-48gln\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:11 crc kubenswrapper[5001]: I0128 17:39:11.758775 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f770500-4b6a-4366-b824-7ff68c9fb513-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.468842 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6cd78dba-3413-48b2-ad2c-05f8e426e064","Type":"ContainerStarted","Data":"d428a2b6523be8e7375b0981fb787c4e757b0b2f1fded06ebb76b710bd898c5e"} Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.469254 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6cd78dba-3413-48b2-ad2c-05f8e426e064","Type":"ContainerStarted","Data":"2f89ed804a525d694ae16f5e5c681a88daf14f029268ab63a646a6d7ebe1ab9a"} Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.469271 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6cd78dba-3413-48b2-ad2c-05f8e426e064","Type":"ContainerStarted","Data":"b2110e6aa889c30c207995ef8d2ebc5c57f153a5eff47629607f59cb1b3da7a2"} Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.473707 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.473779 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"7f770500-4b6a-4366-b824-7ff68c9fb513","Type":"ContainerDied","Data":"35e9d616d202f2df38aa744895e936bda9372e18bd8a7031730152113ab31cf5"} Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.473833 5001 scope.go:117] "RemoveContainer" containerID="5bafcb601549868dfff50dc5e87d11c962a03c4e6e1f43f0648f11a614113087" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.492602 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.492585807 podStartE2EDuration="2.492585807s" podCreationTimestamp="2026-01-28 17:39:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:39:12.489840408 +0000 UTC m=+1398.657628668" watchObservedRunningTime="2026-01-28 17:39:12.492585807 +0000 UTC m=+1398.660374037" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.498011 5001 scope.go:117] "RemoveContainer" containerID="070c976759101cca5fc090126b83ccc8ad6c02b2390f10825f882a3e22f537cc" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.526650 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.535888 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.544056 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:39:12 crc kubenswrapper[5001]: E0128 17:39:12.544478 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f770500-4b6a-4366-b824-7ff68c9fb513" containerName="nova-kuttl-api-log" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.544507 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f770500-4b6a-4366-b824-7ff68c9fb513" containerName="nova-kuttl-api-log" Jan 28 17:39:12 crc kubenswrapper[5001]: E0128 17:39:12.544531 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f770500-4b6a-4366-b824-7ff68c9fb513" containerName="nova-kuttl-api-api" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.544538 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f770500-4b6a-4366-b824-7ff68c9fb513" containerName="nova-kuttl-api-api" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.544725 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f770500-4b6a-4366-b824-7ff68c9fb513" containerName="nova-kuttl-api-log" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.544744 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f770500-4b6a-4366-b824-7ff68c9fb513" containerName="nova-kuttl-api-api" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.545585 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.549380 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.558061 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.604773 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f770500-4b6a-4366-b824-7ff68c9fb513" path="/var/lib/kubelet/pods/7f770500-4b6a-4366-b824-7ff68c9fb513/volumes" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.605365 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebf86c5f-6dd4-4159-b6a2-4f84d0565205" path="/var/lib/kubelet/pods/ebf86c5f-6dd4-4159-b6a2-4f84d0565205/volumes" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.676135 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15575aa-305b-4ecc-b7d7-75bfa8285518-logs\") pod \"nova-kuttl-api-0\" (UID: \"a15575aa-305b-4ecc-b7d7-75bfa8285518\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.676220 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15575aa-305b-4ecc-b7d7-75bfa8285518-config-data\") pod \"nova-kuttl-api-0\" (UID: \"a15575aa-305b-4ecc-b7d7-75bfa8285518\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.676264 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k55nj\" (UniqueName: \"kubernetes.io/projected/a15575aa-305b-4ecc-b7d7-75bfa8285518-kube-api-access-k55nj\") pod \"nova-kuttl-api-0\" (UID: \"a15575aa-305b-4ecc-b7d7-75bfa8285518\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.777829 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15575aa-305b-4ecc-b7d7-75bfa8285518-logs\") pod \"nova-kuttl-api-0\" (UID: \"a15575aa-305b-4ecc-b7d7-75bfa8285518\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.777932 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15575aa-305b-4ecc-b7d7-75bfa8285518-config-data\") pod \"nova-kuttl-api-0\" (UID: \"a15575aa-305b-4ecc-b7d7-75bfa8285518\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.777996 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k55nj\" (UniqueName: \"kubernetes.io/projected/a15575aa-305b-4ecc-b7d7-75bfa8285518-kube-api-access-k55nj\") pod \"nova-kuttl-api-0\" (UID: \"a15575aa-305b-4ecc-b7d7-75bfa8285518\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.778390 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15575aa-305b-4ecc-b7d7-75bfa8285518-logs\") pod \"nova-kuttl-api-0\" (UID: \"a15575aa-305b-4ecc-b7d7-75bfa8285518\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:12 crc 
kubenswrapper[5001]: I0128 17:39:12.781631 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15575aa-305b-4ecc-b7d7-75bfa8285518-config-data\") pod \"nova-kuttl-api-0\" (UID: \"a15575aa-305b-4ecc-b7d7-75bfa8285518\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.794789 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k55nj\" (UniqueName: \"kubernetes.io/projected/a15575aa-305b-4ecc-b7d7-75bfa8285518-kube-api-access-k55nj\") pod \"nova-kuttl-api-0\" (UID: \"a15575aa-305b-4ecc-b7d7-75bfa8285518\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:12 crc kubenswrapper[5001]: I0128 17:39:12.867507 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:13 crc kubenswrapper[5001]: I0128 17:39:13.279356 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:39:13 crc kubenswrapper[5001]: I0128 17:39:13.493018 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"a15575aa-305b-4ecc-b7d7-75bfa8285518","Type":"ContainerStarted","Data":"e851e3af5b5424abd84a438c8200f54ec6f5a64116e305d7a7513308691374b9"} Jan 28 17:39:13 crc kubenswrapper[5001]: I0128 17:39:13.493348 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"a15575aa-305b-4ecc-b7d7-75bfa8285518","Type":"ContainerStarted","Data":"1769f21cfe8b7feb7edc2a3303fe74ee373b1adf6b5fbf83052c73cd0a1831b9"} Jan 28 17:39:14 crc kubenswrapper[5001]: I0128 17:39:14.502468 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"a15575aa-305b-4ecc-b7d7-75bfa8285518","Type":"ContainerStarted","Data":"beb33035c588a98a133bb6fadd055f491ca69857f51f178e9b7a5902209413f2"} Jan 28 17:39:14 crc kubenswrapper[5001]: I0128 17:39:14.525631 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.52561593 podStartE2EDuration="2.52561593s" podCreationTimestamp="2026-01-28 17:39:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:39:14.520133323 +0000 UTC m=+1400.687921573" watchObservedRunningTime="2026-01-28 17:39:14.52561593 +0000 UTC m=+1400.693404160" Jan 28 17:39:14 crc kubenswrapper[5001]: I0128 17:39:14.784874 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:39:15 crc kubenswrapper[5001]: I0128 17:39:15.218645 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz"] Jan 28 17:39:15 crc kubenswrapper[5001]: I0128 17:39:15.220435 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz" Jan 28 17:39:15 crc kubenswrapper[5001]: I0128 17:39:15.222362 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data" Jan 28 17:39:15 crc kubenswrapper[5001]: I0128 17:39:15.222665 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts" Jan 28 17:39:15 crc kubenswrapper[5001]: I0128 17:39:15.245161 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz"] Jan 28 17:39:15 crc kubenswrapper[5001]: I0128 17:39:15.314418 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8d85522-9147-4b92-b399-ca5f64d299ee-scripts\") pod \"nova-kuttl-cell1-cell-mapping-zs2xz\" (UID: \"e8d85522-9147-4b92-b399-ca5f64d299ee\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz" Jan 28 17:39:15 crc kubenswrapper[5001]: I0128 17:39:15.314705 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs82j\" (UniqueName: \"kubernetes.io/projected/e8d85522-9147-4b92-b399-ca5f64d299ee-kube-api-access-zs82j\") pod \"nova-kuttl-cell1-cell-mapping-zs2xz\" (UID: \"e8d85522-9147-4b92-b399-ca5f64d299ee\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz" Jan 28 17:39:15 crc kubenswrapper[5001]: I0128 17:39:15.314914 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8d85522-9147-4b92-b399-ca5f64d299ee-config-data\") pod \"nova-kuttl-cell1-cell-mapping-zs2xz\" (UID: \"e8d85522-9147-4b92-b399-ca5f64d299ee\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz" Jan 28 17:39:15 crc kubenswrapper[5001]: I0128 17:39:15.416469 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8d85522-9147-4b92-b399-ca5f64d299ee-config-data\") pod \"nova-kuttl-cell1-cell-mapping-zs2xz\" (UID: \"e8d85522-9147-4b92-b399-ca5f64d299ee\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz" Jan 28 17:39:15 crc kubenswrapper[5001]: I0128 17:39:15.416828 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8d85522-9147-4b92-b399-ca5f64d299ee-scripts\") pod \"nova-kuttl-cell1-cell-mapping-zs2xz\" (UID: \"e8d85522-9147-4b92-b399-ca5f64d299ee\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz" Jan 28 17:39:15 crc kubenswrapper[5001]: I0128 17:39:15.416943 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs82j\" (UniqueName: \"kubernetes.io/projected/e8d85522-9147-4b92-b399-ca5f64d299ee-kube-api-access-zs82j\") pod \"nova-kuttl-cell1-cell-mapping-zs2xz\" (UID: \"e8d85522-9147-4b92-b399-ca5f64d299ee\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz" Jan 28 17:39:15 crc kubenswrapper[5001]: I0128 17:39:15.423354 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8d85522-9147-4b92-b399-ca5f64d299ee-scripts\") pod \"nova-kuttl-cell1-cell-mapping-zs2xz\" (UID: \"e8d85522-9147-4b92-b399-ca5f64d299ee\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz" Jan 28 17:39:15 crc 
kubenswrapper[5001]: I0128 17:39:15.423815 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8d85522-9147-4b92-b399-ca5f64d299ee-config-data\") pod \"nova-kuttl-cell1-cell-mapping-zs2xz\" (UID: \"e8d85522-9147-4b92-b399-ca5f64d299ee\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz" Jan 28 17:39:15 crc kubenswrapper[5001]: I0128 17:39:15.449566 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs82j\" (UniqueName: \"kubernetes.io/projected/e8d85522-9147-4b92-b399-ca5f64d299ee-kube-api-access-zs82j\") pod \"nova-kuttl-cell1-cell-mapping-zs2xz\" (UID: \"e8d85522-9147-4b92-b399-ca5f64d299ee\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz" Jan 28 17:39:15 crc kubenswrapper[5001]: I0128 17:39:15.550256 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz" Jan 28 17:39:15 crc kubenswrapper[5001]: I0128 17:39:15.824723 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:15 crc kubenswrapper[5001]: W0128 17:39:15.996750 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8d85522_9147_4b92_b399_ca5f64d299ee.slice/crio-a7c16e77c4f108f004a7f1c2de29987c345c858ac5610c0f9e2d3e1193b9dad8 WatchSource:0}: Error finding container a7c16e77c4f108f004a7f1c2de29987c345c858ac5610c0f9e2d3e1193b9dad8: Status 404 returned error can't find the container with id a7c16e77c4f108f004a7f1c2de29987c345c858ac5610c0f9e2d3e1193b9dad8 Jan 28 17:39:15 crc kubenswrapper[5001]: I0128 17:39:15.998742 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz"] Jan 28 17:39:16 crc kubenswrapper[5001]: I0128 17:39:16.139921 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:16 crc kubenswrapper[5001]: I0128 17:39:16.139988 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:16 crc kubenswrapper[5001]: I0128 17:39:16.520462 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz" event={"ID":"e8d85522-9147-4b92-b399-ca5f64d299ee","Type":"ContainerStarted","Data":"9fc87329a3f9d577f57d6de4eedac3d1915bfc3024d8b7681e1550c0bd07e776"} Jan 28 17:39:16 crc kubenswrapper[5001]: I0128 17:39:16.520817 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz" event={"ID":"e8d85522-9147-4b92-b399-ca5f64d299ee","Type":"ContainerStarted","Data":"a7c16e77c4f108f004a7f1c2de29987c345c858ac5610c0f9e2d3e1193b9dad8"} Jan 28 17:39:16 crc kubenswrapper[5001]: I0128 17:39:16.541472 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz" podStartSLOduration=1.5414540410000002 podStartE2EDuration="1.541454041s" podCreationTimestamp="2026-01-28 17:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:39:16.534567894 +0000 UTC m=+1402.702356124" watchObservedRunningTime="2026-01-28 17:39:16.541454041 +0000 UTC m=+1402.709242271" Jan 28 17:39:20 crc kubenswrapper[5001]: I0128 
17:39:20.825694 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:20 crc kubenswrapper[5001]: I0128 17:39:20.847404 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:21 crc kubenswrapper[5001]: I0128 17:39:21.140485 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:21 crc kubenswrapper[5001]: I0128 17:39:21.140549 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:21 crc kubenswrapper[5001]: I0128 17:39:21.566664 5001 generic.go:334] "Generic (PLEG): container finished" podID="e8d85522-9147-4b92-b399-ca5f64d299ee" containerID="9fc87329a3f9d577f57d6de4eedac3d1915bfc3024d8b7681e1550c0bd07e776" exitCode=0 Jan 28 17:39:21 crc kubenswrapper[5001]: I0128 17:39:21.566763 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz" event={"ID":"e8d85522-9147-4b92-b399-ca5f64d299ee","Type":"ContainerDied","Data":"9fc87329a3f9d577f57d6de4eedac3d1915bfc3024d8b7681e1550c0bd07e776"} Jan 28 17:39:21 crc kubenswrapper[5001]: I0128 17:39:21.593018 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:22 crc kubenswrapper[5001]: I0128 17:39:22.181305 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6cd78dba-3413-48b2-ad2c-05f8e426e064" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.135:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:39:22 crc kubenswrapper[5001]: I0128 17:39:22.222258 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6cd78dba-3413-48b2-ad2c-05f8e426e064" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.135:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:39:22 crc kubenswrapper[5001]: I0128 17:39:22.868864 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:22 crc kubenswrapper[5001]: I0128 17:39:22.869331 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:22 crc kubenswrapper[5001]: I0128 17:39:22.933069 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz" Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.035228 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8d85522-9147-4b92-b399-ca5f64d299ee-scripts\") pod \"e8d85522-9147-4b92-b399-ca5f64d299ee\" (UID: \"e8d85522-9147-4b92-b399-ca5f64d299ee\") " Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.036751 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8d85522-9147-4b92-b399-ca5f64d299ee-config-data\") pod \"e8d85522-9147-4b92-b399-ca5f64d299ee\" (UID: \"e8d85522-9147-4b92-b399-ca5f64d299ee\") " Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.036840 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs82j\" (UniqueName: \"kubernetes.io/projected/e8d85522-9147-4b92-b399-ca5f64d299ee-kube-api-access-zs82j\") pod \"e8d85522-9147-4b92-b399-ca5f64d299ee\" (UID: \"e8d85522-9147-4b92-b399-ca5f64d299ee\") " Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.040970 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8d85522-9147-4b92-b399-ca5f64d299ee-scripts" (OuterVolumeSpecName: "scripts") pod "e8d85522-9147-4b92-b399-ca5f64d299ee" (UID: "e8d85522-9147-4b92-b399-ca5f64d299ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.056521 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d85522-9147-4b92-b399-ca5f64d299ee-kube-api-access-zs82j" (OuterVolumeSpecName: "kube-api-access-zs82j") pod "e8d85522-9147-4b92-b399-ca5f64d299ee" (UID: "e8d85522-9147-4b92-b399-ca5f64d299ee"). InnerVolumeSpecName "kube-api-access-zs82j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.057248 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8d85522-9147-4b92-b399-ca5f64d299ee-config-data" (OuterVolumeSpecName: "config-data") pod "e8d85522-9147-4b92-b399-ca5f64d299ee" (UID: "e8d85522-9147-4b92-b399-ca5f64d299ee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.139182 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8d85522-9147-4b92-b399-ca5f64d299ee-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.139233 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8d85522-9147-4b92-b399-ca5f64d299ee-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.139247 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs82j\" (UniqueName: \"kubernetes.io/projected/e8d85522-9147-4b92-b399-ca5f64d299ee-kube-api-access-zs82j\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.589296 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz" event={"ID":"e8d85522-9147-4b92-b399-ca5f64d299ee","Type":"ContainerDied","Data":"a7c16e77c4f108f004a7f1c2de29987c345c858ac5610c0f9e2d3e1193b9dad8"} Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.589340 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7c16e77c4f108f004a7f1c2de29987c345c858ac5610c0f9e2d3e1193b9dad8" Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.589404 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz" Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.765123 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.765374 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="a15575aa-305b-4ecc-b7d7-75bfa8285518" containerName="nova-kuttl-api-log" containerID="cri-o://e851e3af5b5424abd84a438c8200f54ec6f5a64116e305d7a7513308691374b9" gracePeriod=30 Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.765557 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="a15575aa-305b-4ecc-b7d7-75bfa8285518" containerName="nova-kuttl-api-api" containerID="cri-o://beb33035c588a98a133bb6fadd055f491ca69857f51f178e9b7a5902209413f2" gracePeriod=30 Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.771256 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="a15575aa-305b-4ecc-b7d7-75bfa8285518" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.136:8774/\": EOF" Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.771256 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="a15575aa-305b-4ecc-b7d7-75bfa8285518" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.136:8774/\": EOF" Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.820851 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.821075 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="3cad2c50-3290-4d03-841c-82c907cef1b9" containerName="nova-kuttl-scheduler-scheduler" 
containerID="cri-o://029713f9eed3351fbb132348d322596f9f9f83dd25da2272f7225e63969f98fd" gracePeriod=30 Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.857466 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.857763 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6cd78dba-3413-48b2-ad2c-05f8e426e064" containerName="nova-kuttl-metadata-log" containerID="cri-o://2f89ed804a525d694ae16f5e5c681a88daf14f029268ab63a646a6d7ebe1ab9a" gracePeriod=30 Jan 28 17:39:23 crc kubenswrapper[5001]: I0128 17:39:23.857881 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6cd78dba-3413-48b2-ad2c-05f8e426e064" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://d428a2b6523be8e7375b0981fb787c4e757b0b2f1fded06ebb76b710bd898c5e" gracePeriod=30 Jan 28 17:39:24 crc kubenswrapper[5001]: I0128 17:39:24.601662 5001 generic.go:334] "Generic (PLEG): container finished" podID="a15575aa-305b-4ecc-b7d7-75bfa8285518" containerID="e851e3af5b5424abd84a438c8200f54ec6f5a64116e305d7a7513308691374b9" exitCode=143 Jan 28 17:39:24 crc kubenswrapper[5001]: I0128 17:39:24.603186 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"a15575aa-305b-4ecc-b7d7-75bfa8285518","Type":"ContainerDied","Data":"e851e3af5b5424abd84a438c8200f54ec6f5a64116e305d7a7513308691374b9"} Jan 28 17:39:24 crc kubenswrapper[5001]: I0128 17:39:24.604150 5001 generic.go:334] "Generic (PLEG): container finished" podID="6cd78dba-3413-48b2-ad2c-05f8e426e064" containerID="2f89ed804a525d694ae16f5e5c681a88daf14f029268ab63a646a6d7ebe1ab9a" exitCode=143 Jan 28 17:39:24 crc kubenswrapper[5001]: I0128 17:39:24.604197 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6cd78dba-3413-48b2-ad2c-05f8e426e064","Type":"ContainerDied","Data":"2f89ed804a525d694ae16f5e5c681a88daf14f029268ab63a646a6d7ebe1ab9a"} Jan 28 17:39:25 crc kubenswrapper[5001]: E0128 17:39:25.826917 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="029713f9eed3351fbb132348d322596f9f9f83dd25da2272f7225e63969f98fd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:39:25 crc kubenswrapper[5001]: E0128 17:39:25.829203 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="029713f9eed3351fbb132348d322596f9f9f83dd25da2272f7225e63969f98fd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:39:25 crc kubenswrapper[5001]: E0128 17:39:25.830627 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="029713f9eed3351fbb132348d322596f9f9f83dd25da2272f7225e63969f98fd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:39:25 crc kubenswrapper[5001]: E0128 17:39:25.830686 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , 
stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="3cad2c50-3290-4d03-841c-82c907cef1b9" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.398526 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.521121 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd78dba-3413-48b2-ad2c-05f8e426e064-config-data\") pod \"6cd78dba-3413-48b2-ad2c-05f8e426e064\" (UID: \"6cd78dba-3413-48b2-ad2c-05f8e426e064\") " Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.521262 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8vr9\" (UniqueName: \"kubernetes.io/projected/6cd78dba-3413-48b2-ad2c-05f8e426e064-kube-api-access-w8vr9\") pod \"6cd78dba-3413-48b2-ad2c-05f8e426e064\" (UID: \"6cd78dba-3413-48b2-ad2c-05f8e426e064\") " Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.521995 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cd78dba-3413-48b2-ad2c-05f8e426e064-logs\") pod \"6cd78dba-3413-48b2-ad2c-05f8e426e064\" (UID: \"6cd78dba-3413-48b2-ad2c-05f8e426e064\") " Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.522520 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cd78dba-3413-48b2-ad2c-05f8e426e064-logs" (OuterVolumeSpecName: "logs") pod "6cd78dba-3413-48b2-ad2c-05f8e426e064" (UID: "6cd78dba-3413-48b2-ad2c-05f8e426e064"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.525875 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd78dba-3413-48b2-ad2c-05f8e426e064-kube-api-access-w8vr9" (OuterVolumeSpecName: "kube-api-access-w8vr9") pod "6cd78dba-3413-48b2-ad2c-05f8e426e064" (UID: "6cd78dba-3413-48b2-ad2c-05f8e426e064"). InnerVolumeSpecName "kube-api-access-w8vr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.541896 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd78dba-3413-48b2-ad2c-05f8e426e064-config-data" (OuterVolumeSpecName: "config-data") pod "6cd78dba-3413-48b2-ad2c-05f8e426e064" (UID: "6cd78dba-3413-48b2-ad2c-05f8e426e064"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.625133 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cd78dba-3413-48b2-ad2c-05f8e426e064-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.625172 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd78dba-3413-48b2-ad2c-05f8e426e064-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.625188 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8vr9\" (UniqueName: \"kubernetes.io/projected/6cd78dba-3413-48b2-ad2c-05f8e426e064-kube-api-access-w8vr9\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.632685 5001 generic.go:334] "Generic (PLEG): container finished" podID="3cad2c50-3290-4d03-841c-82c907cef1b9" containerID="029713f9eed3351fbb132348d322596f9f9f83dd25da2272f7225e63969f98fd" exitCode=0 Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.632759 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3cad2c50-3290-4d03-841c-82c907cef1b9","Type":"ContainerDied","Data":"029713f9eed3351fbb132348d322596f9f9f83dd25da2272f7225e63969f98fd"} Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.637437 5001 generic.go:334] "Generic (PLEG): container finished" podID="6cd78dba-3413-48b2-ad2c-05f8e426e064" containerID="d428a2b6523be8e7375b0981fb787c4e757b0b2f1fded06ebb76b710bd898c5e" exitCode=0 Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.637501 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6cd78dba-3413-48b2-ad2c-05f8e426e064","Type":"ContainerDied","Data":"d428a2b6523be8e7375b0981fb787c4e757b0b2f1fded06ebb76b710bd898c5e"} Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.637534 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6cd78dba-3413-48b2-ad2c-05f8e426e064","Type":"ContainerDied","Data":"b2110e6aa889c30c207995ef8d2ebc5c57f153a5eff47629607f59cb1b3da7a2"} Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.637558 5001 scope.go:117] "RemoveContainer" containerID="d428a2b6523be8e7375b0981fb787c4e757b0b2f1fded06ebb76b710bd898c5e" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.637791 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.667472 5001 scope.go:117] "RemoveContainer" containerID="2f89ed804a525d694ae16f5e5c681a88daf14f029268ab63a646a6d7ebe1ab9a" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.695947 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.713621 5001 scope.go:117] "RemoveContainer" containerID="d428a2b6523be8e7375b0981fb787c4e757b0b2f1fded06ebb76b710bd898c5e" Jan 28 17:39:27 crc kubenswrapper[5001]: E0128 17:39:27.714101 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d428a2b6523be8e7375b0981fb787c4e757b0b2f1fded06ebb76b710bd898c5e\": container with ID starting with d428a2b6523be8e7375b0981fb787c4e757b0b2f1fded06ebb76b710bd898c5e not found: ID does not exist" containerID="d428a2b6523be8e7375b0981fb787c4e757b0b2f1fded06ebb76b710bd898c5e" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.714129 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d428a2b6523be8e7375b0981fb787c4e757b0b2f1fded06ebb76b710bd898c5e"} err="failed to get container status \"d428a2b6523be8e7375b0981fb787c4e757b0b2f1fded06ebb76b710bd898c5e\": rpc error: code = NotFound desc = could not find container \"d428a2b6523be8e7375b0981fb787c4e757b0b2f1fded06ebb76b710bd898c5e\": container with ID starting with d428a2b6523be8e7375b0981fb787c4e757b0b2f1fded06ebb76b710bd898c5e not found: ID does not exist" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.714148 5001 scope.go:117] "RemoveContainer" containerID="2f89ed804a525d694ae16f5e5c681a88daf14f029268ab63a646a6d7ebe1ab9a" Jan 28 17:39:27 crc kubenswrapper[5001]: E0128 17:39:27.714315 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f89ed804a525d694ae16f5e5c681a88daf14f029268ab63a646a6d7ebe1ab9a\": container with ID starting with 2f89ed804a525d694ae16f5e5c681a88daf14f029268ab63a646a6d7ebe1ab9a not found: ID does not exist" containerID="2f89ed804a525d694ae16f5e5c681a88daf14f029268ab63a646a6d7ebe1ab9a" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.714332 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f89ed804a525d694ae16f5e5c681a88daf14f029268ab63a646a6d7ebe1ab9a"} err="failed to get container status \"2f89ed804a525d694ae16f5e5c681a88daf14f029268ab63a646a6d7ebe1ab9a\": rpc error: code = NotFound desc = could not find container \"2f89ed804a525d694ae16f5e5c681a88daf14f029268ab63a646a6d7ebe1ab9a\": container with ID starting with 2f89ed804a525d694ae16f5e5c681a88daf14f029268ab63a646a6d7ebe1ab9a not found: ID does not exist" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.715844 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.727033 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:39:27 crc kubenswrapper[5001]: E0128 17:39:27.727370 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd78dba-3413-48b2-ad2c-05f8e426e064" containerName="nova-kuttl-metadata-metadata" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.727384 5001 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6cd78dba-3413-48b2-ad2c-05f8e426e064" containerName="nova-kuttl-metadata-metadata" Jan 28 17:39:27 crc kubenswrapper[5001]: E0128 17:39:27.727414 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d85522-9147-4b92-b399-ca5f64d299ee" containerName="nova-manage" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.727421 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d85522-9147-4b92-b399-ca5f64d299ee" containerName="nova-manage" Jan 28 17:39:27 crc kubenswrapper[5001]: E0128 17:39:27.727432 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd78dba-3413-48b2-ad2c-05f8e426e064" containerName="nova-kuttl-metadata-log" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.727438 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd78dba-3413-48b2-ad2c-05f8e426e064" containerName="nova-kuttl-metadata-log" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.727571 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cd78dba-3413-48b2-ad2c-05f8e426e064" containerName="nova-kuttl-metadata-metadata" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.727585 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d85522-9147-4b92-b399-ca5f64d299ee" containerName="nova-manage" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.727612 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cd78dba-3413-48b2-ad2c-05f8e426e064" containerName="nova-kuttl-metadata-log" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.728527 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.735347 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.739020 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.792789 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.827652 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c9fdbf4-1b79-451b-90e9-5a437ee517b8-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"6c9fdbf4-1b79-451b-90e9-5a437ee517b8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.827704 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn55g\" (UniqueName: \"kubernetes.io/projected/6c9fdbf4-1b79-451b-90e9-5a437ee517b8-kube-api-access-pn55g\") pod \"nova-kuttl-metadata-0\" (UID: \"6c9fdbf4-1b79-451b-90e9-5a437ee517b8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.827733 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c9fdbf4-1b79-451b-90e9-5a437ee517b8-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"6c9fdbf4-1b79-451b-90e9-5a437ee517b8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.928527 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rkd2\" (UniqueName: \"kubernetes.io/projected/3cad2c50-3290-4d03-841c-82c907cef1b9-kube-api-access-5rkd2\") pod \"3cad2c50-3290-4d03-841c-82c907cef1b9\" (UID: \"3cad2c50-3290-4d03-841c-82c907cef1b9\") " Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.928632 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cad2c50-3290-4d03-841c-82c907cef1b9-config-data\") pod \"3cad2c50-3290-4d03-841c-82c907cef1b9\" (UID: \"3cad2c50-3290-4d03-841c-82c907cef1b9\") " Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.928932 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c9fdbf4-1b79-451b-90e9-5a437ee517b8-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"6c9fdbf4-1b79-451b-90e9-5a437ee517b8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.928964 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn55g\" (UniqueName: \"kubernetes.io/projected/6c9fdbf4-1b79-451b-90e9-5a437ee517b8-kube-api-access-pn55g\") pod \"nova-kuttl-metadata-0\" (UID: \"6c9fdbf4-1b79-451b-90e9-5a437ee517b8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.929001 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c9fdbf4-1b79-451b-90e9-5a437ee517b8-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"6c9fdbf4-1b79-451b-90e9-5a437ee517b8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.929361 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c9fdbf4-1b79-451b-90e9-5a437ee517b8-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"6c9fdbf4-1b79-451b-90e9-5a437ee517b8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.933287 5001 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cad2c50-3290-4d03-841c-82c907cef1b9-kube-api-access-5rkd2" (OuterVolumeSpecName: "kube-api-access-5rkd2") pod "3cad2c50-3290-4d03-841c-82c907cef1b9" (UID: "3cad2c50-3290-4d03-841c-82c907cef1b9"). InnerVolumeSpecName "kube-api-access-5rkd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.933465 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c9fdbf4-1b79-451b-90e9-5a437ee517b8-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"6c9fdbf4-1b79-451b-90e9-5a437ee517b8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.944745 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn55g\" (UniqueName: \"kubernetes.io/projected/6c9fdbf4-1b79-451b-90e9-5a437ee517b8-kube-api-access-pn55g\") pod \"nova-kuttl-metadata-0\" (UID: \"6c9fdbf4-1b79-451b-90e9-5a437ee517b8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:27 crc kubenswrapper[5001]: I0128 17:39:27.948205 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cad2c50-3290-4d03-841c-82c907cef1b9-config-data" (OuterVolumeSpecName: "config-data") pod "3cad2c50-3290-4d03-841c-82c907cef1b9" (UID: "3cad2c50-3290-4d03-841c-82c907cef1b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.030692 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rkd2\" (UniqueName: \"kubernetes.io/projected/3cad2c50-3290-4d03-841c-82c907cef1b9-kube-api-access-5rkd2\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.030723 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cad2c50-3290-4d03-841c-82c907cef1b9-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.047370 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.492705 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.604249 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cd78dba-3413-48b2-ad2c-05f8e426e064" path="/var/lib/kubelet/pods/6cd78dba-3413-48b2-ad2c-05f8e426e064/volumes" Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.646788 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3cad2c50-3290-4d03-841c-82c907cef1b9","Type":"ContainerDied","Data":"157ffa28b5ad002dfa5ed3c77b17e1493a27c23c028e06faddc2bb85f13d4d8f"} Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.646871 5001 scope.go:117] "RemoveContainer" containerID="029713f9eed3351fbb132348d322596f9f9f83dd25da2272f7225e63969f98fd" Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.646823 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.651260 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6c9fdbf4-1b79-451b-90e9-5a437ee517b8","Type":"ContainerStarted","Data":"2915c336b1d201f065c98450a7a9f638395725a2dcfe9b2c12debcf9f823fe03"} Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.667429 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.675895 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.695749 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:39:28 crc kubenswrapper[5001]: E0128 17:39:28.696194 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cad2c50-3290-4d03-841c-82c907cef1b9" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.696207 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cad2c50-3290-4d03-841c-82c907cef1b9" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.698758 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cad2c50-3290-4d03-841c-82c907cef1b9" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.699807 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.703307 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.711827 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.741030 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52810d1f-ed60-4852-bcc1-88004d24baf6-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"52810d1f-ed60-4852-bcc1-88004d24baf6\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.741068 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jgzc\" (UniqueName: \"kubernetes.io/projected/52810d1f-ed60-4852-bcc1-88004d24baf6-kube-api-access-8jgzc\") pod \"nova-kuttl-scheduler-0\" (UID: \"52810d1f-ed60-4852-bcc1-88004d24baf6\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.842820 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52810d1f-ed60-4852-bcc1-88004d24baf6-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"52810d1f-ed60-4852-bcc1-88004d24baf6\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.843211 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jgzc\" (UniqueName: \"kubernetes.io/projected/52810d1f-ed60-4852-bcc1-88004d24baf6-kube-api-access-8jgzc\") pod 
\"nova-kuttl-scheduler-0\" (UID: \"52810d1f-ed60-4852-bcc1-88004d24baf6\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.847691 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52810d1f-ed60-4852-bcc1-88004d24baf6-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"52810d1f-ed60-4852-bcc1-88004d24baf6\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:28 crc kubenswrapper[5001]: I0128 17:39:28.859031 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jgzc\" (UniqueName: \"kubernetes.io/projected/52810d1f-ed60-4852-bcc1-88004d24baf6-kube-api-access-8jgzc\") pod \"nova-kuttl-scheduler-0\" (UID: \"52810d1f-ed60-4852-bcc1-88004d24baf6\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.060356 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.476285 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:39:29 crc kubenswrapper[5001]: W0128 17:39:29.490469 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52810d1f_ed60_4852_bcc1_88004d24baf6.slice/crio-a0b05873102799833f5cf58d521353952d4de1d45d0a3a1fa03fd906c2ddc027 WatchSource:0}: Error finding container a0b05873102799833f5cf58d521353952d4de1d45d0a3a1fa03fd906c2ddc027: Status 404 returned error can't find the container with id a0b05873102799833f5cf58d521353952d4de1d45d0a3a1fa03fd906c2ddc027 Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.502827 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.552382 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15575aa-305b-4ecc-b7d7-75bfa8285518-logs\") pod \"a15575aa-305b-4ecc-b7d7-75bfa8285518\" (UID: \"a15575aa-305b-4ecc-b7d7-75bfa8285518\") " Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.552451 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15575aa-305b-4ecc-b7d7-75bfa8285518-config-data\") pod \"a15575aa-305b-4ecc-b7d7-75bfa8285518\" (UID: \"a15575aa-305b-4ecc-b7d7-75bfa8285518\") " Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.552485 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k55nj\" (UniqueName: \"kubernetes.io/projected/a15575aa-305b-4ecc-b7d7-75bfa8285518-kube-api-access-k55nj\") pod \"a15575aa-305b-4ecc-b7d7-75bfa8285518\" (UID: \"a15575aa-305b-4ecc-b7d7-75bfa8285518\") " Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.553408 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a15575aa-305b-4ecc-b7d7-75bfa8285518-logs" (OuterVolumeSpecName: "logs") pod "a15575aa-305b-4ecc-b7d7-75bfa8285518" (UID: "a15575aa-305b-4ecc-b7d7-75bfa8285518"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.555752 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a15575aa-305b-4ecc-b7d7-75bfa8285518-kube-api-access-k55nj" (OuterVolumeSpecName: "kube-api-access-k55nj") pod "a15575aa-305b-4ecc-b7d7-75bfa8285518" (UID: "a15575aa-305b-4ecc-b7d7-75bfa8285518"). InnerVolumeSpecName "kube-api-access-k55nj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.584629 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a15575aa-305b-4ecc-b7d7-75bfa8285518-config-data" (OuterVolumeSpecName: "config-data") pod "a15575aa-305b-4ecc-b7d7-75bfa8285518" (UID: "a15575aa-305b-4ecc-b7d7-75bfa8285518"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.654410 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15575aa-305b-4ecc-b7d7-75bfa8285518-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.654439 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15575aa-305b-4ecc-b7d7-75bfa8285518-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.654450 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k55nj\" (UniqueName: \"kubernetes.io/projected/a15575aa-305b-4ecc-b7d7-75bfa8285518-kube-api-access-k55nj\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.661747 5001 generic.go:334] "Generic (PLEG): container finished" podID="a15575aa-305b-4ecc-b7d7-75bfa8285518" containerID="beb33035c588a98a133bb6fadd055f491ca69857f51f178e9b7a5902209413f2" exitCode=0 Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.661805 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"a15575aa-305b-4ecc-b7d7-75bfa8285518","Type":"ContainerDied","Data":"beb33035c588a98a133bb6fadd055f491ca69857f51f178e9b7a5902209413f2"} Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.661831 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"a15575aa-305b-4ecc-b7d7-75bfa8285518","Type":"ContainerDied","Data":"1769f21cfe8b7feb7edc2a3303fe74ee373b1adf6b5fbf83052c73cd0a1831b9"} Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.661848 5001 scope.go:117] "RemoveContainer" containerID="beb33035c588a98a133bb6fadd055f491ca69857f51f178e9b7a5902209413f2" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.661925 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.668858 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"52810d1f-ed60-4852-bcc1-88004d24baf6","Type":"ContainerStarted","Data":"eada5777af6487a568311314fc905ea62b7c2f3ab14eba6352bfaa782ba83c8b"} Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.668929 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"52810d1f-ed60-4852-bcc1-88004d24baf6","Type":"ContainerStarted","Data":"a0b05873102799833f5cf58d521353952d4de1d45d0a3a1fa03fd906c2ddc027"} Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.672455 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6c9fdbf4-1b79-451b-90e9-5a437ee517b8","Type":"ContainerStarted","Data":"36cc1c44917cfe711d55bf35e461294e5aff8be185280bc68094b842e25e6236"} Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.672549 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6c9fdbf4-1b79-451b-90e9-5a437ee517b8","Type":"ContainerStarted","Data":"ce6123a504887e59a301819e3e5351759fd9d6b6ba771716defe2d505f461c60"} Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.688184 5001 scope.go:117] "RemoveContainer" containerID="e851e3af5b5424abd84a438c8200f54ec6f5a64116e305d7a7513308691374b9" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.706399 5001 scope.go:117] "RemoveContainer" containerID="beb33035c588a98a133bb6fadd055f491ca69857f51f178e9b7a5902209413f2" Jan 28 17:39:29 crc kubenswrapper[5001]: E0128 17:39:29.706849 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"beb33035c588a98a133bb6fadd055f491ca69857f51f178e9b7a5902209413f2\": container with ID starting with beb33035c588a98a133bb6fadd055f491ca69857f51f178e9b7a5902209413f2 not found: ID does not exist" containerID="beb33035c588a98a133bb6fadd055f491ca69857f51f178e9b7a5902209413f2" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.706921 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beb33035c588a98a133bb6fadd055f491ca69857f51f178e9b7a5902209413f2"} err="failed to get container status \"beb33035c588a98a133bb6fadd055f491ca69857f51f178e9b7a5902209413f2\": rpc error: code = NotFound desc = could not find container \"beb33035c588a98a133bb6fadd055f491ca69857f51f178e9b7a5902209413f2\": container with ID starting with beb33035c588a98a133bb6fadd055f491ca69857f51f178e9b7a5902209413f2 not found: ID does not exist" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.706994 5001 scope.go:117] "RemoveContainer" containerID="e851e3af5b5424abd84a438c8200f54ec6f5a64116e305d7a7513308691374b9" Jan 28 17:39:29 crc kubenswrapper[5001]: E0128 17:39:29.707461 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e851e3af5b5424abd84a438c8200f54ec6f5a64116e305d7a7513308691374b9\": container with ID starting with e851e3af5b5424abd84a438c8200f54ec6f5a64116e305d7a7513308691374b9 not found: ID does not exist" containerID="e851e3af5b5424abd84a438c8200f54ec6f5a64116e305d7a7513308691374b9" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.707501 5001 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e851e3af5b5424abd84a438c8200f54ec6f5a64116e305d7a7513308691374b9"} err="failed to get container status \"e851e3af5b5424abd84a438c8200f54ec6f5a64116e305d7a7513308691374b9\": rpc error: code = NotFound desc = could not find container \"e851e3af5b5424abd84a438c8200f54ec6f5a64116e305d7a7513308691374b9\": container with ID starting with e851e3af5b5424abd84a438c8200f54ec6f5a64116e305d7a7513308691374b9 not found: ID does not exist" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.729727 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.729703943 podStartE2EDuration="2.729703943s" podCreationTimestamp="2026-01-28 17:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:39:29.719735047 +0000 UTC m=+1415.887523277" watchObservedRunningTime="2026-01-28 17:39:29.729703943 +0000 UTC m=+1415.897492183" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.734455 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=1.734442099 podStartE2EDuration="1.734442099s" podCreationTimestamp="2026-01-28 17:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:39:29.695735858 +0000 UTC m=+1415.863524088" watchObservedRunningTime="2026-01-28 17:39:29.734442099 +0000 UTC m=+1415.902230339" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.766404 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.777190 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.784590 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:39:29 crc kubenswrapper[5001]: E0128 17:39:29.784944 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a15575aa-305b-4ecc-b7d7-75bfa8285518" containerName="nova-kuttl-api-api" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.784961 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="a15575aa-305b-4ecc-b7d7-75bfa8285518" containerName="nova-kuttl-api-api" Jan 28 17:39:29 crc kubenswrapper[5001]: E0128 17:39:29.785012 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a15575aa-305b-4ecc-b7d7-75bfa8285518" containerName="nova-kuttl-api-log" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.785020 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="a15575aa-305b-4ecc-b7d7-75bfa8285518" containerName="nova-kuttl-api-log" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.785271 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="a15575aa-305b-4ecc-b7d7-75bfa8285518" containerName="nova-kuttl-api-log" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.785285 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="a15575aa-305b-4ecc-b7d7-75bfa8285518" containerName="nova-kuttl-api-api" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.786117 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.788605 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.794282 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.857098 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-logs\") pod \"nova-kuttl-api-0\" (UID: \"f93149f4-7bcc-4c27-a4c6-93da8ac7693b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.857384 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-config-data\") pod \"nova-kuttl-api-0\" (UID: \"f93149f4-7bcc-4c27-a4c6-93da8ac7693b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.857450 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64hwg\" (UniqueName: \"kubernetes.io/projected/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-kube-api-access-64hwg\") pod \"nova-kuttl-api-0\" (UID: \"f93149f4-7bcc-4c27-a4c6-93da8ac7693b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.959063 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64hwg\" (UniqueName: \"kubernetes.io/projected/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-kube-api-access-64hwg\") pod \"nova-kuttl-api-0\" (UID: \"f93149f4-7bcc-4c27-a4c6-93da8ac7693b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.959151 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-logs\") pod \"nova-kuttl-api-0\" (UID: \"f93149f4-7bcc-4c27-a4c6-93da8ac7693b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.959201 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-config-data\") pod \"nova-kuttl-api-0\" (UID: \"f93149f4-7bcc-4c27-a4c6-93da8ac7693b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.959651 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-logs\") pod \"nova-kuttl-api-0\" (UID: \"f93149f4-7bcc-4c27-a4c6-93da8ac7693b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.964745 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-config-data\") pod \"nova-kuttl-api-0\" (UID: \"f93149f4-7bcc-4c27-a4c6-93da8ac7693b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:29 crc kubenswrapper[5001]: I0128 17:39:29.973035 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64hwg\" 
(UniqueName: \"kubernetes.io/projected/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-kube-api-access-64hwg\") pod \"nova-kuttl-api-0\" (UID: \"f93149f4-7bcc-4c27-a4c6-93da8ac7693b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:30 crc kubenswrapper[5001]: I0128 17:39:30.110166 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:30 crc kubenswrapper[5001]: I0128 17:39:30.584321 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:39:30 crc kubenswrapper[5001]: I0128 17:39:30.608341 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cad2c50-3290-4d03-841c-82c907cef1b9" path="/var/lib/kubelet/pods/3cad2c50-3290-4d03-841c-82c907cef1b9/volumes" Jan 28 17:39:30 crc kubenswrapper[5001]: I0128 17:39:30.609157 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a15575aa-305b-4ecc-b7d7-75bfa8285518" path="/var/lib/kubelet/pods/a15575aa-305b-4ecc-b7d7-75bfa8285518/volumes" Jan 28 17:39:30 crc kubenswrapper[5001]: I0128 17:39:30.695069 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"f93149f4-7bcc-4c27-a4c6-93da8ac7693b","Type":"ContainerStarted","Data":"e1c4f43c6dc9a87bbdf822de6a410e01edb3b4cd0dda28a292c40e7adf030d3f"} Jan 28 17:39:31 crc kubenswrapper[5001]: I0128 17:39:31.705371 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"f93149f4-7bcc-4c27-a4c6-93da8ac7693b","Type":"ContainerStarted","Data":"86f2ffabfa17b389782980010cb581c76862d2e8368983c5f15b3f426679f4a2"} Jan 28 17:39:31 crc kubenswrapper[5001]: I0128 17:39:31.705619 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"f93149f4-7bcc-4c27-a4c6-93da8ac7693b","Type":"ContainerStarted","Data":"6baac5d0503d3313d14fdedb79c7a79ee87ddcffaceb6d92aede2c83ae3334b1"} Jan 28 17:39:31 crc kubenswrapper[5001]: I0128 17:39:31.730965 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.730937665 podStartE2EDuration="2.730937665s" podCreationTimestamp="2026-01-28 17:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:39:31.719420605 +0000 UTC m=+1417.887208835" watchObservedRunningTime="2026-01-28 17:39:31.730937665 +0000 UTC m=+1417.898725895" Jan 28 17:39:33 crc kubenswrapper[5001]: I0128 17:39:33.047875 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:33 crc kubenswrapper[5001]: I0128 17:39:33.048273 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:34 crc kubenswrapper[5001]: I0128 17:39:34.060739 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:34 crc kubenswrapper[5001]: I0128 17:39:34.834676 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:39:34 crc kubenswrapper[5001]: I0128 17:39:34.834733 5001 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:39:38 crc kubenswrapper[5001]: I0128 17:39:38.047868 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:38 crc kubenswrapper[5001]: I0128 17:39:38.048224 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:39 crc kubenswrapper[5001]: I0128 17:39:39.061274 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:39 crc kubenswrapper[5001]: I0128 17:39:39.089555 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:39 crc kubenswrapper[5001]: I0128 17:39:39.131144 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6c9fdbf4-1b79-451b-90e9-5a437ee517b8" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.138:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:39:39 crc kubenswrapper[5001]: I0128 17:39:39.131143 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6c9fdbf4-1b79-451b-90e9-5a437ee517b8" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.138:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:39:39 crc kubenswrapper[5001]: I0128 17:39:39.794134 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:39:40 crc kubenswrapper[5001]: I0128 17:39:40.111324 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:40 crc kubenswrapper[5001]: I0128 17:39:40.111373 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:41 crc kubenswrapper[5001]: I0128 17:39:41.193207 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="f93149f4-7bcc-4c27-a4c6-93da8ac7693b" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.140:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:39:41 crc kubenswrapper[5001]: I0128 17:39:41.193207 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="f93149f4-7bcc-4c27-a4c6-93da8ac7693b" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.140:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:39:48 crc kubenswrapper[5001]: I0128 17:39:48.051390 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:48 crc kubenswrapper[5001]: I0128 17:39:48.052220 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:48 crc kubenswrapper[5001]: I0128 
17:39:48.055218 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:48 crc kubenswrapper[5001]: I0128 17:39:48.055629 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:39:50 crc kubenswrapper[5001]: I0128 17:39:50.115637 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:50 crc kubenswrapper[5001]: I0128 17:39:50.116295 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:50 crc kubenswrapper[5001]: I0128 17:39:50.118658 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:50 crc kubenswrapper[5001]: I0128 17:39:50.119056 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:50 crc kubenswrapper[5001]: I0128 17:39:50.849931 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:39:50 crc kubenswrapper[5001]: I0128 17:39:50.853005 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:40:04 crc kubenswrapper[5001]: I0128 17:40:04.834888 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:40:04 crc kubenswrapper[5001]: I0128 17:40:04.835531 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:40:34 crc kubenswrapper[5001]: I0128 17:40:34.833706 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:40:34 crc kubenswrapper[5001]: I0128 17:40:34.834268 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:40:34 crc kubenswrapper[5001]: I0128 17:40:34.834316 5001 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:40:34 crc kubenswrapper[5001]: I0128 17:40:34.834892 5001 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c2ea3f31e70bc76378f33610638ba8a4614d235f2874eb9a110ed1d5f56e411"} pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:40:34 crc 
kubenswrapper[5001]: I0128 17:40:34.834948 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" containerID="cri-o://9c2ea3f31e70bc76378f33610638ba8a4614d235f2874eb9a110ed1d5f56e411" gracePeriod=600 Jan 28 17:40:34 crc kubenswrapper[5001]: E0128 17:40:34.910349 5001 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8de2d052_6f7c_4345_91fa_ba2fc7532251.slice/crio-conmon-9c2ea3f31e70bc76378f33610638ba8a4614d235f2874eb9a110ed1d5f56e411.scope\": RecentStats: unable to find data in memory cache]" Jan 28 17:40:35 crc kubenswrapper[5001]: I0128 17:40:35.225928 5001 generic.go:334] "Generic (PLEG): container finished" podID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerID="9c2ea3f31e70bc76378f33610638ba8a4614d235f2874eb9a110ed1d5f56e411" exitCode=0 Jan 28 17:40:35 crc kubenswrapper[5001]: I0128 17:40:35.226748 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerDied","Data":"9c2ea3f31e70bc76378f33610638ba8a4614d235f2874eb9a110ed1d5f56e411"} Jan 28 17:40:35 crc kubenswrapper[5001]: I0128 17:40:35.226798 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerStarted","Data":"69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8"} Jan 28 17:40:35 crc kubenswrapper[5001]: I0128 17:40:35.226821 5001 scope.go:117] "RemoveContainer" containerID="ccc5cb4e707a79570ebb35140ac6a6c78fccff38dad2a94d8294b2b7a155b3e0" Jan 28 17:40:57 crc kubenswrapper[5001]: I0128 17:40:57.662937 5001 scope.go:117] "RemoveContainer" containerID="4e1636acb75e638978dd96f1415a966fdebd6936fba28b60a3ccf524badcf620" Jan 28 17:41:57 crc kubenswrapper[5001]: I0128 17:41:57.717860 5001 scope.go:117] "RemoveContainer" containerID="da5502ebd60c2839eef6cea5a18613d8d970a4e96f86a72b682bcf58b1ac0919" Jan 28 17:42:03 crc kubenswrapper[5001]: I0128 17:42:03.304162 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9cnrv"] Jan 28 17:42:03 crc kubenswrapper[5001]: I0128 17:42:03.306771 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9cnrv" Jan 28 17:42:03 crc kubenswrapper[5001]: I0128 17:42:03.320802 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9cnrv"] Jan 28 17:42:03 crc kubenswrapper[5001]: I0128 17:42:03.457631 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55bdbd65-adf7-48a8-86f6-51d613056e8b-utilities\") pod \"certified-operators-9cnrv\" (UID: \"55bdbd65-adf7-48a8-86f6-51d613056e8b\") " pod="openshift-marketplace/certified-operators-9cnrv" Jan 28 17:42:03 crc kubenswrapper[5001]: I0128 17:42:03.457698 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55bdbd65-adf7-48a8-86f6-51d613056e8b-catalog-content\") pod \"certified-operators-9cnrv\" (UID: \"55bdbd65-adf7-48a8-86f6-51d613056e8b\") " pod="openshift-marketplace/certified-operators-9cnrv" Jan 28 17:42:03 crc kubenswrapper[5001]: I0128 17:42:03.457946 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh2tk\" (UniqueName: \"kubernetes.io/projected/55bdbd65-adf7-48a8-86f6-51d613056e8b-kube-api-access-kh2tk\") pod \"certified-operators-9cnrv\" (UID: \"55bdbd65-adf7-48a8-86f6-51d613056e8b\") " pod="openshift-marketplace/certified-operators-9cnrv" Jan 28 17:42:03 crc kubenswrapper[5001]: I0128 17:42:03.560691 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh2tk\" (UniqueName: \"kubernetes.io/projected/55bdbd65-adf7-48a8-86f6-51d613056e8b-kube-api-access-kh2tk\") pod \"certified-operators-9cnrv\" (UID: \"55bdbd65-adf7-48a8-86f6-51d613056e8b\") " pod="openshift-marketplace/certified-operators-9cnrv" Jan 28 17:42:03 crc kubenswrapper[5001]: I0128 17:42:03.561073 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55bdbd65-adf7-48a8-86f6-51d613056e8b-utilities\") pod \"certified-operators-9cnrv\" (UID: \"55bdbd65-adf7-48a8-86f6-51d613056e8b\") " pod="openshift-marketplace/certified-operators-9cnrv" Jan 28 17:42:03 crc kubenswrapper[5001]: I0128 17:42:03.561244 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55bdbd65-adf7-48a8-86f6-51d613056e8b-catalog-content\") pod \"certified-operators-9cnrv\" (UID: \"55bdbd65-adf7-48a8-86f6-51d613056e8b\") " pod="openshift-marketplace/certified-operators-9cnrv" Jan 28 17:42:03 crc kubenswrapper[5001]: I0128 17:42:03.561611 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55bdbd65-adf7-48a8-86f6-51d613056e8b-utilities\") pod \"certified-operators-9cnrv\" (UID: \"55bdbd65-adf7-48a8-86f6-51d613056e8b\") " pod="openshift-marketplace/certified-operators-9cnrv" Jan 28 17:42:03 crc kubenswrapper[5001]: I0128 17:42:03.561632 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55bdbd65-adf7-48a8-86f6-51d613056e8b-catalog-content\") pod \"certified-operators-9cnrv\" (UID: \"55bdbd65-adf7-48a8-86f6-51d613056e8b\") " pod="openshift-marketplace/certified-operators-9cnrv" Jan 28 17:42:03 crc kubenswrapper[5001]: I0128 17:42:03.589400 5001 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kh2tk\" (UniqueName: \"kubernetes.io/projected/55bdbd65-adf7-48a8-86f6-51d613056e8b-kube-api-access-kh2tk\") pod \"certified-operators-9cnrv\" (UID: \"55bdbd65-adf7-48a8-86f6-51d613056e8b\") " pod="openshift-marketplace/certified-operators-9cnrv" Jan 28 17:42:03 crc kubenswrapper[5001]: I0128 17:42:03.664448 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9cnrv" Jan 28 17:42:04 crc kubenswrapper[5001]: I0128 17:42:04.159077 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9cnrv"] Jan 28 17:42:04 crc kubenswrapper[5001]: I0128 17:42:04.950964 5001 generic.go:334] "Generic (PLEG): container finished" podID="55bdbd65-adf7-48a8-86f6-51d613056e8b" containerID="171b03b3003071e1a3bf7a823ea39dfb7d79fed0bf6b813e505ef86156a143e4" exitCode=0 Jan 28 17:42:04 crc kubenswrapper[5001]: I0128 17:42:04.951056 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cnrv" event={"ID":"55bdbd65-adf7-48a8-86f6-51d613056e8b","Type":"ContainerDied","Data":"171b03b3003071e1a3bf7a823ea39dfb7d79fed0bf6b813e505ef86156a143e4"} Jan 28 17:42:04 crc kubenswrapper[5001]: I0128 17:42:04.951380 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cnrv" event={"ID":"55bdbd65-adf7-48a8-86f6-51d613056e8b","Type":"ContainerStarted","Data":"bcb9899b796d3530deb8d144da5654415d6eccc921c44572dc966163b01a0264"} Jan 28 17:42:06 crc kubenswrapper[5001]: E0128 17:42:06.736426 5001 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55bdbd65_adf7_48a8_86f6_51d613056e8b.slice/crio-26143c274dd720630873596527f082f9c2c591c5cb1eb723f7d5fc5c5ebc6e0d.scope\": RecentStats: unable to find data in memory cache]" Jan 28 17:42:06 crc kubenswrapper[5001]: I0128 17:42:06.973578 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cnrv" event={"ID":"55bdbd65-adf7-48a8-86f6-51d613056e8b","Type":"ContainerStarted","Data":"26143c274dd720630873596527f082f9c2c591c5cb1eb723f7d5fc5c5ebc6e0d"} Jan 28 17:42:07 crc kubenswrapper[5001]: I0128 17:42:07.982557 5001 generic.go:334] "Generic (PLEG): container finished" podID="55bdbd65-adf7-48a8-86f6-51d613056e8b" containerID="26143c274dd720630873596527f082f9c2c591c5cb1eb723f7d5fc5c5ebc6e0d" exitCode=0 Jan 28 17:42:07 crc kubenswrapper[5001]: I0128 17:42:07.982655 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cnrv" event={"ID":"55bdbd65-adf7-48a8-86f6-51d613056e8b","Type":"ContainerDied","Data":"26143c274dd720630873596527f082f9c2c591c5cb1eb723f7d5fc5c5ebc6e0d"} Jan 28 17:42:08 crc kubenswrapper[5001]: I0128 17:42:08.992004 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cnrv" event={"ID":"55bdbd65-adf7-48a8-86f6-51d613056e8b","Type":"ContainerStarted","Data":"f82cf5aab09c106336a388d47f104c571ed5f863c851838d09740d1846f4bcf8"} Jan 28 17:42:09 crc kubenswrapper[5001]: I0128 17:42:09.018363 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9cnrv" podStartSLOduration=2.398890077 podStartE2EDuration="6.018344869s" podCreationTimestamp="2026-01-28 17:42:03 +0000 UTC" firstStartedPulling="2026-01-28 
17:42:04.952964901 +0000 UTC m=+1571.120753131" lastFinishedPulling="2026-01-28 17:42:08.572419693 +0000 UTC m=+1574.740207923" observedRunningTime="2026-01-28 17:42:09.012942263 +0000 UTC m=+1575.180730493" watchObservedRunningTime="2026-01-28 17:42:09.018344869 +0000 UTC m=+1575.186133099" Jan 28 17:42:13 crc kubenswrapper[5001]: I0128 17:42:13.665654 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9cnrv" Jan 28 17:42:13 crc kubenswrapper[5001]: I0128 17:42:13.666341 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9cnrv" Jan 28 17:42:13 crc kubenswrapper[5001]: I0128 17:42:13.723109 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9cnrv" Jan 28 17:42:14 crc kubenswrapper[5001]: I0128 17:42:14.105448 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9cnrv" Jan 28 17:42:14 crc kubenswrapper[5001]: I0128 17:42:14.179708 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9cnrv"] Jan 28 17:42:16 crc kubenswrapper[5001]: I0128 17:42:16.049671 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9cnrv" podUID="55bdbd65-adf7-48a8-86f6-51d613056e8b" containerName="registry-server" containerID="cri-o://f82cf5aab09c106336a388d47f104c571ed5f863c851838d09740d1846f4bcf8" gracePeriod=2 Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.059186 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9cnrv" Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.060483 5001 generic.go:334] "Generic (PLEG): container finished" podID="55bdbd65-adf7-48a8-86f6-51d613056e8b" containerID="f82cf5aab09c106336a388d47f104c571ed5f863c851838d09740d1846f4bcf8" exitCode=0 Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.060517 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cnrv" event={"ID":"55bdbd65-adf7-48a8-86f6-51d613056e8b","Type":"ContainerDied","Data":"f82cf5aab09c106336a388d47f104c571ed5f863c851838d09740d1846f4bcf8"} Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.060542 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cnrv" event={"ID":"55bdbd65-adf7-48a8-86f6-51d613056e8b","Type":"ContainerDied","Data":"bcb9899b796d3530deb8d144da5654415d6eccc921c44572dc966163b01a0264"} Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.060562 5001 scope.go:117] "RemoveContainer" containerID="f82cf5aab09c106336a388d47f104c571ed5f863c851838d09740d1846f4bcf8" Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.085696 5001 scope.go:117] "RemoveContainer" containerID="26143c274dd720630873596527f082f9c2c591c5cb1eb723f7d5fc5c5ebc6e0d" Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.121384 5001 scope.go:117] "RemoveContainer" containerID="171b03b3003071e1a3bf7a823ea39dfb7d79fed0bf6b813e505ef86156a143e4" Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.142055 5001 scope.go:117] "RemoveContainer" containerID="f82cf5aab09c106336a388d47f104c571ed5f863c851838d09740d1846f4bcf8" Jan 28 17:42:17 crc kubenswrapper[5001]: E0128 17:42:17.145496 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"f82cf5aab09c106336a388d47f104c571ed5f863c851838d09740d1846f4bcf8\": container with ID starting with f82cf5aab09c106336a388d47f104c571ed5f863c851838d09740d1846f4bcf8 not found: ID does not exist" containerID="f82cf5aab09c106336a388d47f104c571ed5f863c851838d09740d1846f4bcf8" Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.145572 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f82cf5aab09c106336a388d47f104c571ed5f863c851838d09740d1846f4bcf8"} err="failed to get container status \"f82cf5aab09c106336a388d47f104c571ed5f863c851838d09740d1846f4bcf8\": rpc error: code = NotFound desc = could not find container \"f82cf5aab09c106336a388d47f104c571ed5f863c851838d09740d1846f4bcf8\": container with ID starting with f82cf5aab09c106336a388d47f104c571ed5f863c851838d09740d1846f4bcf8 not found: ID does not exist" Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.145598 5001 scope.go:117] "RemoveContainer" containerID="26143c274dd720630873596527f082f9c2c591c5cb1eb723f7d5fc5c5ebc6e0d" Jan 28 17:42:17 crc kubenswrapper[5001]: E0128 17:42:17.145991 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26143c274dd720630873596527f082f9c2c591c5cb1eb723f7d5fc5c5ebc6e0d\": container with ID starting with 26143c274dd720630873596527f082f9c2c591c5cb1eb723f7d5fc5c5ebc6e0d not found: ID does not exist" containerID="26143c274dd720630873596527f082f9c2c591c5cb1eb723f7d5fc5c5ebc6e0d" Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.146036 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26143c274dd720630873596527f082f9c2c591c5cb1eb723f7d5fc5c5ebc6e0d"} err="failed to get container status \"26143c274dd720630873596527f082f9c2c591c5cb1eb723f7d5fc5c5ebc6e0d\": rpc error: code = NotFound desc = could not find container \"26143c274dd720630873596527f082f9c2c591c5cb1eb723f7d5fc5c5ebc6e0d\": container with ID starting with 26143c274dd720630873596527f082f9c2c591c5cb1eb723f7d5fc5c5ebc6e0d not found: ID does not exist" Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.146069 5001 scope.go:117] "RemoveContainer" containerID="171b03b3003071e1a3bf7a823ea39dfb7d79fed0bf6b813e505ef86156a143e4" Jan 28 17:42:17 crc kubenswrapper[5001]: E0128 17:42:17.146514 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"171b03b3003071e1a3bf7a823ea39dfb7d79fed0bf6b813e505ef86156a143e4\": container with ID starting with 171b03b3003071e1a3bf7a823ea39dfb7d79fed0bf6b813e505ef86156a143e4 not found: ID does not exist" containerID="171b03b3003071e1a3bf7a823ea39dfb7d79fed0bf6b813e505ef86156a143e4" Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.146542 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"171b03b3003071e1a3bf7a823ea39dfb7d79fed0bf6b813e505ef86156a143e4"} err="failed to get container status \"171b03b3003071e1a3bf7a823ea39dfb7d79fed0bf6b813e505ef86156a143e4\": rpc error: code = NotFound desc = could not find container \"171b03b3003071e1a3bf7a823ea39dfb7d79fed0bf6b813e505ef86156a143e4\": container with ID starting with 171b03b3003071e1a3bf7a823ea39dfb7d79fed0bf6b813e505ef86156a143e4 not found: ID does not exist" Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.170453 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/55bdbd65-adf7-48a8-86f6-51d613056e8b-utilities\") pod \"55bdbd65-adf7-48a8-86f6-51d613056e8b\" (UID: \"55bdbd65-adf7-48a8-86f6-51d613056e8b\") " Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.170559 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh2tk\" (UniqueName: \"kubernetes.io/projected/55bdbd65-adf7-48a8-86f6-51d613056e8b-kube-api-access-kh2tk\") pod \"55bdbd65-adf7-48a8-86f6-51d613056e8b\" (UID: \"55bdbd65-adf7-48a8-86f6-51d613056e8b\") " Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.170669 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55bdbd65-adf7-48a8-86f6-51d613056e8b-catalog-content\") pod \"55bdbd65-adf7-48a8-86f6-51d613056e8b\" (UID: \"55bdbd65-adf7-48a8-86f6-51d613056e8b\") " Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.171785 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55bdbd65-adf7-48a8-86f6-51d613056e8b-utilities" (OuterVolumeSpecName: "utilities") pod "55bdbd65-adf7-48a8-86f6-51d613056e8b" (UID: "55bdbd65-adf7-48a8-86f6-51d613056e8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.176562 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55bdbd65-adf7-48a8-86f6-51d613056e8b-kube-api-access-kh2tk" (OuterVolumeSpecName: "kube-api-access-kh2tk") pod "55bdbd65-adf7-48a8-86f6-51d613056e8b" (UID: "55bdbd65-adf7-48a8-86f6-51d613056e8b"). InnerVolumeSpecName "kube-api-access-kh2tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.214144 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55bdbd65-adf7-48a8-86f6-51d613056e8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55bdbd65-adf7-48a8-86f6-51d613056e8b" (UID: "55bdbd65-adf7-48a8-86f6-51d613056e8b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.273155 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55bdbd65-adf7-48a8-86f6-51d613056e8b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.273199 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55bdbd65-adf7-48a8-86f6-51d613056e8b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:17 crc kubenswrapper[5001]: I0128 17:42:17.273213 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh2tk\" (UniqueName: \"kubernetes.io/projected/55bdbd65-adf7-48a8-86f6-51d613056e8b-kube-api-access-kh2tk\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:18 crc kubenswrapper[5001]: I0128 17:42:18.069700 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9cnrv" Jan 28 17:42:18 crc kubenswrapper[5001]: I0128 17:42:18.103571 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9cnrv"] Jan 28 17:42:18 crc kubenswrapper[5001]: I0128 17:42:18.115046 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9cnrv"] Jan 28 17:42:18 crc kubenswrapper[5001]: I0128 17:42:18.603677 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55bdbd65-adf7-48a8-86f6-51d613056e8b" path="/var/lib/kubelet/pods/55bdbd65-adf7-48a8-86f6-51d613056e8b/volumes" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.280543 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k"] Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.289501 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz"] Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.297694 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-csp9k"] Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.304642 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-zs2xz"] Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.457826 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell091b9-account-delete-7hcf4"] Jan 28 17:42:30 crc kubenswrapper[5001]: E0128 17:42:30.458254 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55bdbd65-adf7-48a8-86f6-51d613056e8b" containerName="extract-utilities" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.458273 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="55bdbd65-adf7-48a8-86f6-51d613056e8b" containerName="extract-utilities" Jan 28 17:42:30 crc kubenswrapper[5001]: E0128 17:42:30.458296 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55bdbd65-adf7-48a8-86f6-51d613056e8b" containerName="registry-server" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.458303 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="55bdbd65-adf7-48a8-86f6-51d613056e8b" containerName="registry-server" Jan 28 17:42:30 crc kubenswrapper[5001]: E0128 17:42:30.458311 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55bdbd65-adf7-48a8-86f6-51d613056e8b" containerName="extract-content" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.458318 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="55bdbd65-adf7-48a8-86f6-51d613056e8b" containerName="extract-content" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.458485 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="55bdbd65-adf7-48a8-86f6-51d613056e8b" containerName="registry-server" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.459142 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell091b9-account-delete-7hcf4" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.482260 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell091b9-account-delete-7hcf4"] Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.563208 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.563501 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="1f74f30e-eae5-44b0-b858-08841c899345" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://fb57ab9e1ccb67a23118791f0f0a107ad4928f86480088d664ab999b7f567209" gracePeriod=30 Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.574756 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.574958 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6c9fdbf4-1b79-451b-90e9-5a437ee517b8" containerName="nova-kuttl-metadata-log" containerID="cri-o://ce6123a504887e59a301819e3e5351759fd9d6b6ba771716defe2d505f461c60" gracePeriod=30 Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.575115 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6c9fdbf4-1b79-451b-90e9-5a437ee517b8" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://36cc1c44917cfe711d55bf35e461294e5aff8be185280bc68094b842e25e6236" gracePeriod=30 Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.587524 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-727tt\" (UniqueName: \"kubernetes.io/projected/b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b-kube-api-access-727tt\") pod \"novacell091b9-account-delete-7hcf4\" (UID: \"b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b\") " pod="nova-kuttl-default/novacell091b9-account-delete-7hcf4" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.587577 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b-operator-scripts\") pod \"novacell091b9-account-delete-7hcf4\" (UID: \"b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b\") " pod="nova-kuttl-default/novacell091b9-account-delete-7hcf4" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.591030 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p"] Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.604782 5001 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="nova-kuttl-default/nova-kuttl-api-0" secret="" err="secret \"nova-nova-kuttl-dockercfg-jdrzf\" not found" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.624642 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81664e15-831b-43f6-af28-95b2f545f731" path="/var/lib/kubelet/pods/81664e15-831b-43f6-af28-95b2f545f731/volumes" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.625293 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d85522-9147-4b92-b399-ca5f64d299ee" path="/var/lib/kubelet/pods/e8d85522-9147-4b92-b399-ca5f64d299ee/volumes" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.625782 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-rpk6p"] Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.637104 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novaapi2588-account-delete-dnhxc"] Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.638364 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi2588-account-delete-dnhxc" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.658301 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapi2588-account-delete-dnhxc"] Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.673003 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell1455c-account-delete-d56r6"] Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.673967 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1455c-account-delete-d56r6" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.687003 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.693265 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-727tt\" (UniqueName: \"kubernetes.io/projected/b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b-kube-api-access-727tt\") pod \"novacell091b9-account-delete-7hcf4\" (UID: \"b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b\") " pod="nova-kuttl-default/novacell091b9-account-delete-7hcf4" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.693299 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b-operator-scripts\") pod \"novacell091b9-account-delete-7hcf4\" (UID: \"b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b\") " pod="nova-kuttl-default/novacell091b9-account-delete-7hcf4" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.693335 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptzd7\" (UniqueName: \"kubernetes.io/projected/c7e18326-2af3-4c01-a883-0aa78a1ca37e-kube-api-access-ptzd7\") pod \"novaapi2588-account-delete-dnhxc\" (UID: \"c7e18326-2af3-4c01-a883-0aa78a1ca37e\") " pod="nova-kuttl-default/novaapi2588-account-delete-dnhxc" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.693425 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7e18326-2af3-4c01-a883-0aa78a1ca37e-operator-scripts\") pod \"novaapi2588-account-delete-dnhxc\" (UID: \"c7e18326-2af3-4c01-a883-0aa78a1ca37e\") " 
pod="nova-kuttl-default/novaapi2588-account-delete-dnhxc" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.697205 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b-operator-scripts\") pod \"novacell091b9-account-delete-7hcf4\" (UID: \"b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b\") " pod="nova-kuttl-default/novacell091b9-account-delete-7hcf4" Jan 28 17:42:30 crc kubenswrapper[5001]: E0128 17:42:30.702078 5001 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-api-config-data: secret "nova-kuttl-api-config-data" not found Jan 28 17:42:30 crc kubenswrapper[5001]: E0128 17:42:30.702127 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-config-data podName:f93149f4-7bcc-4c27-a4c6-93da8ac7693b nodeName:}" failed. No retries permitted until 2026-01-28 17:42:31.202110091 +0000 UTC m=+1597.369898321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-config-data") pod "nova-kuttl-api-0" (UID: "f93149f4-7bcc-4c27-a4c6-93da8ac7693b") : secret "nova-kuttl-api-config-data" not found Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.729042 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1455c-account-delete-d56r6"] Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.751580 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-727tt\" (UniqueName: \"kubernetes.io/projected/b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b-kube-api-access-727tt\") pod \"novacell091b9-account-delete-7hcf4\" (UID: \"b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b\") " pod="nova-kuttl-default/novacell091b9-account-delete-7hcf4" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.767896 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt"] Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.778680 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell091b9-account-delete-7hcf4" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.795460 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb-operator-scripts\") pod \"novacell1455c-account-delete-d56r6\" (UID: \"e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb\") " pod="nova-kuttl-default/novacell1455c-account-delete-d56r6" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.795517 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q2tm\" (UniqueName: \"kubernetes.io/projected/e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb-kube-api-access-4q2tm\") pod \"novacell1455c-account-delete-d56r6\" (UID: \"e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb\") " pod="nova-kuttl-default/novacell1455c-account-delete-d56r6" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.795551 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptzd7\" (UniqueName: \"kubernetes.io/projected/c7e18326-2af3-4c01-a883-0aa78a1ca37e-kube-api-access-ptzd7\") pod \"novaapi2588-account-delete-dnhxc\" (UID: \"c7e18326-2af3-4c01-a883-0aa78a1ca37e\") " pod="nova-kuttl-default/novaapi2588-account-delete-dnhxc" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.795665 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7e18326-2af3-4c01-a883-0aa78a1ca37e-operator-scripts\") pod \"novaapi2588-account-delete-dnhxc\" (UID: \"c7e18326-2af3-4c01-a883-0aa78a1ca37e\") " pod="nova-kuttl-default/novaapi2588-account-delete-dnhxc" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.801594 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7e18326-2af3-4c01-a883-0aa78a1ca37e-operator-scripts\") pod \"novaapi2588-account-delete-dnhxc\" (UID: \"c7e18326-2af3-4c01-a883-0aa78a1ca37e\") " pod="nova-kuttl-default/novaapi2588-account-delete-dnhxc" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.808089 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rslkt"] Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.829203 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptzd7\" (UniqueName: \"kubernetes.io/projected/c7e18326-2af3-4c01-a883-0aa78a1ca37e-kube-api-access-ptzd7\") pod \"novaapi2588-account-delete-dnhxc\" (UID: \"c7e18326-2af3-4c01-a883-0aa78a1ca37e\") " pod="nova-kuttl-default/novaapi2588-account-delete-dnhxc" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.852081 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.853050 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="52810d1f-ed60-4852-bcc1-88004d24baf6" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://eada5777af6487a568311314fc905ea62b7c2f3ab14eba6352bfaa782ba83c8b" gracePeriod=30 Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.867254 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:42:30 crc kubenswrapper[5001]: 
I0128 17:42:30.867455 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="9aeb8343-bfe8-4d4f-80ff-50fb55910691" containerName="nova-kuttl-cell1-novncproxy-novncproxy" containerID="cri-o://7aacb92facc3dd83a3b4b66af50df06b554ec71673769abd3a77a945addc608e" gracePeriod=30 Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.878831 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.879081 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="3f483fd4-461b-47b8-9e21-0398c809539c" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://8932faba297bf3287e204ee8c0840334f493e95c8295907e9e8d74f54d94091a" gracePeriod=30 Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.898272 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb-operator-scripts\") pod \"novacell1455c-account-delete-d56r6\" (UID: \"e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb\") " pod="nova-kuttl-default/novacell1455c-account-delete-d56r6" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.898332 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q2tm\" (UniqueName: \"kubernetes.io/projected/e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb-kube-api-access-4q2tm\") pod \"novacell1455c-account-delete-d56r6\" (UID: \"e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb\") " pod="nova-kuttl-default/novacell1455c-account-delete-d56r6" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.899760 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb-operator-scripts\") pod \"novacell1455c-account-delete-d56r6\" (UID: \"e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb\") " pod="nova-kuttl-default/novacell1455c-account-delete-d56r6" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.933663 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q2tm\" (UniqueName: \"kubernetes.io/projected/e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb-kube-api-access-4q2tm\") pod \"novacell1455c-account-delete-d56r6\" (UID: \"e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb\") " pod="nova-kuttl-default/novacell1455c-account-delete-d56r6" Jan 28 17:42:30 crc kubenswrapper[5001]: I0128 17:42:30.976869 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi2588-account-delete-dnhxc" Jan 28 17:42:31 crc kubenswrapper[5001]: I0128 17:42:31.030740 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell1455c-account-delete-d56r6" Jan 28 17:42:31 crc kubenswrapper[5001]: I0128 17:42:31.176517 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6c9fdbf4-1b79-451b-90e9-5a437ee517b8","Type":"ContainerDied","Data":"ce6123a504887e59a301819e3e5351759fd9d6b6ba771716defe2d505f461c60"} Jan 28 17:42:31 crc kubenswrapper[5001]: I0128 17:42:31.176460 5001 generic.go:334] "Generic (PLEG): container finished" podID="6c9fdbf4-1b79-451b-90e9-5a437ee517b8" containerID="ce6123a504887e59a301819e3e5351759fd9d6b6ba771716defe2d505f461c60" exitCode=143 Jan 28 17:42:31 crc kubenswrapper[5001]: I0128 17:42:31.177365 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="f93149f4-7bcc-4c27-a4c6-93da8ac7693b" containerName="nova-kuttl-api-log" containerID="cri-o://6baac5d0503d3313d14fdedb79c7a79ee87ddcffaceb6d92aede2c83ae3334b1" gracePeriod=30 Jan 28 17:42:31 crc kubenswrapper[5001]: I0128 17:42:31.178500 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="f93149f4-7bcc-4c27-a4c6-93da8ac7693b" containerName="nova-kuttl-api-api" containerID="cri-o://86f2ffabfa17b389782980010cb581c76862d2e8368983c5f15b3f426679f4a2" gracePeriod=30 Jan 28 17:42:31 crc kubenswrapper[5001]: E0128 17:42:31.205680 5001 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-api-config-data: secret "nova-kuttl-api-config-data" not found Jan 28 17:42:31 crc kubenswrapper[5001]: E0128 17:42:31.205784 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-config-data podName:f93149f4-7bcc-4c27-a4c6-93da8ac7693b nodeName:}" failed. No retries permitted until 2026-01-28 17:42:32.205755633 +0000 UTC m=+1598.373543873 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-config-data") pod "nova-kuttl-api-0" (UID: "f93149f4-7bcc-4c27-a4c6-93da8ac7693b") : secret "nova-kuttl-api-config-data" not found Jan 28 17:42:31 crc kubenswrapper[5001]: I0128 17:42:31.276605 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell091b9-account-delete-7hcf4"] Jan 28 17:42:31 crc kubenswrapper[5001]: I0128 17:42:31.497100 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapi2588-account-delete-dnhxc"] Jan 28 17:42:31 crc kubenswrapper[5001]: W0128 17:42:31.524821 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7e18326_2af3_4c01_a883_0aa78a1ca37e.slice/crio-4d08151cd368186ddf328d274a55a9c1ce9f463c86964ce1a7331523f963bb0c WatchSource:0}: Error finding container 4d08151cd368186ddf328d274a55a9c1ce9f463c86964ce1a7331523f963bb0c: Status 404 returned error can't find the container with id 4d08151cd368186ddf328d274a55a9c1ce9f463c86964ce1a7331523f963bb0c Jan 28 17:42:31 crc kubenswrapper[5001]: I0128 17:42:31.624404 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1455c-account-delete-d56r6"] Jan 28 17:42:31 crc kubenswrapper[5001]: I0128 17:42:31.929049 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.016759 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbmpg\" (UniqueName: \"kubernetes.io/projected/9aeb8343-bfe8-4d4f-80ff-50fb55910691-kube-api-access-pbmpg\") pod \"9aeb8343-bfe8-4d4f-80ff-50fb55910691\" (UID: \"9aeb8343-bfe8-4d4f-80ff-50fb55910691\") " Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.016918 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aeb8343-bfe8-4d4f-80ff-50fb55910691-config-data\") pod \"9aeb8343-bfe8-4d4f-80ff-50fb55910691\" (UID: \"9aeb8343-bfe8-4d4f-80ff-50fb55910691\") " Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.023706 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aeb8343-bfe8-4d4f-80ff-50fb55910691-kube-api-access-pbmpg" (OuterVolumeSpecName: "kube-api-access-pbmpg") pod "9aeb8343-bfe8-4d4f-80ff-50fb55910691" (UID: "9aeb8343-bfe8-4d4f-80ff-50fb55910691"). InnerVolumeSpecName "kube-api-access-pbmpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.040465 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aeb8343-bfe8-4d4f-80ff-50fb55910691-config-data" (OuterVolumeSpecName: "config-data") pod "9aeb8343-bfe8-4d4f-80ff-50fb55910691" (UID: "9aeb8343-bfe8-4d4f-80ff-50fb55910691"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.118695 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbmpg\" (UniqueName: \"kubernetes.io/projected/9aeb8343-bfe8-4d4f-80ff-50fb55910691-kube-api-access-pbmpg\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.119100 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aeb8343-bfe8-4d4f-80ff-50fb55910691-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.188771 5001 generic.go:334] "Generic (PLEG): container finished" podID="b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b" containerID="cfd7385966844cde05395abce577ee859280fbc860cece7f0e975366c1fe3273" exitCode=0 Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.188844 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell091b9-account-delete-7hcf4" event={"ID":"b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b","Type":"ContainerDied","Data":"cfd7385966844cde05395abce577ee859280fbc860cece7f0e975366c1fe3273"} Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.188875 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell091b9-account-delete-7hcf4" event={"ID":"b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b","Type":"ContainerStarted","Data":"2fc10ce39fbdea043b46edf35d57e92a5cacb42869bef7709636c9aae150c37c"} Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.191136 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1455c-account-delete-d56r6" event={"ID":"e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb","Type":"ContainerStarted","Data":"e27297bdb546fbbcfdf66dd7f6a4608570c116d79fa831e27a155b5f3e9b77bd"} Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.191185 5001 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1455c-account-delete-d56r6" event={"ID":"e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb","Type":"ContainerStarted","Data":"ced29fff8386da22aeb7f6f098cc6b6ad7e231f52cd98cf525293838107b254d"} Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.194851 5001 generic.go:334] "Generic (PLEG): container finished" podID="f93149f4-7bcc-4c27-a4c6-93da8ac7693b" containerID="6baac5d0503d3313d14fdedb79c7a79ee87ddcffaceb6d92aede2c83ae3334b1" exitCode=143 Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.194933 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"f93149f4-7bcc-4c27-a4c6-93da8ac7693b","Type":"ContainerDied","Data":"6baac5d0503d3313d14fdedb79c7a79ee87ddcffaceb6d92aede2c83ae3334b1"} Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.196600 5001 generic.go:334] "Generic (PLEG): container finished" podID="c7e18326-2af3-4c01-a883-0aa78a1ca37e" containerID="4eb5b5411e5f8966a65f821d3a17bd93860a7ec243228cb0dc5e93952ee008d0" exitCode=0 Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.196640 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi2588-account-delete-dnhxc" event={"ID":"c7e18326-2af3-4c01-a883-0aa78a1ca37e","Type":"ContainerDied","Data":"4eb5b5411e5f8966a65f821d3a17bd93860a7ec243228cb0dc5e93952ee008d0"} Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.196657 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi2588-account-delete-dnhxc" event={"ID":"c7e18326-2af3-4c01-a883-0aa78a1ca37e","Type":"ContainerStarted","Data":"4d08151cd368186ddf328d274a55a9c1ce9f463c86964ce1a7331523f963bb0c"} Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.198736 5001 generic.go:334] "Generic (PLEG): container finished" podID="9aeb8343-bfe8-4d4f-80ff-50fb55910691" containerID="7aacb92facc3dd83a3b4b66af50df06b554ec71673769abd3a77a945addc608e" exitCode=0 Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.198793 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"9aeb8343-bfe8-4d4f-80ff-50fb55910691","Type":"ContainerDied","Data":"7aacb92facc3dd83a3b4b66af50df06b554ec71673769abd3a77a945addc608e"} Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.198822 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.198856 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"9aeb8343-bfe8-4d4f-80ff-50fb55910691","Type":"ContainerDied","Data":"351b4d424f8069d4de66a8fffc1470f2ee6a4fc7d812ceb810b3b363f0e6ef2b"} Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.198881 5001 scope.go:117] "RemoveContainer" containerID="7aacb92facc3dd83a3b4b66af50df06b554ec71673769abd3a77a945addc608e" Jan 28 17:42:32 crc kubenswrapper[5001]: E0128 17:42:32.220620 5001 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-api-config-data: secret "nova-kuttl-api-config-data" not found Jan 28 17:42:32 crc kubenswrapper[5001]: E0128 17:42:32.220913 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-config-data podName:f93149f4-7bcc-4c27-a4c6-93da8ac7693b nodeName:}" failed. 
No retries permitted until 2026-01-28 17:42:34.220885955 +0000 UTC m=+1600.388674185 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-config-data") pod "nova-kuttl-api-0" (UID: "f93149f4-7bcc-4c27-a4c6-93da8ac7693b") : secret "nova-kuttl-api-config-data" not found Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.230406 5001 scope.go:117] "RemoveContainer" containerID="7aacb92facc3dd83a3b4b66af50df06b554ec71673769abd3a77a945addc608e" Jan 28 17:42:32 crc kubenswrapper[5001]: E0128 17:42:32.230948 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aacb92facc3dd83a3b4b66af50df06b554ec71673769abd3a77a945addc608e\": container with ID starting with 7aacb92facc3dd83a3b4b66af50df06b554ec71673769abd3a77a945addc608e not found: ID does not exist" containerID="7aacb92facc3dd83a3b4b66af50df06b554ec71673769abd3a77a945addc608e" Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.231073 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aacb92facc3dd83a3b4b66af50df06b554ec71673769abd3a77a945addc608e"} err="failed to get container status \"7aacb92facc3dd83a3b4b66af50df06b554ec71673769abd3a77a945addc608e\": rpc error: code = NotFound desc = could not find container \"7aacb92facc3dd83a3b4b66af50df06b554ec71673769abd3a77a945addc608e\": container with ID starting with 7aacb92facc3dd83a3b4b66af50df06b554ec71673769abd3a77a945addc608e not found: ID does not exist" Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.232193 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/novacell1455c-account-delete-d56r6" podStartSLOduration=2.232165921 podStartE2EDuration="2.232165921s" podCreationTimestamp="2026-01-28 17:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:42:32.225838278 +0000 UTC m=+1598.393626528" watchObservedRunningTime="2026-01-28 17:42:32.232165921 +0000 UTC m=+1598.399954151" Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.256501 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.263470 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.607703 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ef524d0-e581-4c10-a29e-34b123dbda85" path="/var/lib/kubelet/pods/0ef524d0-e581-4c10-a29e-34b123dbda85/volumes" Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.608222 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aeb8343-bfe8-4d4f-80ff-50fb55910691" path="/var/lib/kubelet/pods/9aeb8343-bfe8-4d4f-80ff-50fb55910691/volumes" Jan 28 17:42:32 crc kubenswrapper[5001]: I0128 17:42:32.608895 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2cb1b11-68d6-435d-828c-a3b6138fc903" path="/var/lib/kubelet/pods/d2cb1b11-68d6-435d-828c-a3b6138fc903/volumes" Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.196587 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.208520 5001 generic.go:334] "Generic (PLEG): container finished" podID="3f483fd4-461b-47b8-9e21-0398c809539c" containerID="8932faba297bf3287e204ee8c0840334f493e95c8295907e9e8d74f54d94091a" exitCode=0 Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.208578 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.208611 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"3f483fd4-461b-47b8-9e21-0398c809539c","Type":"ContainerDied","Data":"8932faba297bf3287e204ee8c0840334f493e95c8295907e9e8d74f54d94091a"} Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.208651 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"3f483fd4-461b-47b8-9e21-0398c809539c","Type":"ContainerDied","Data":"38e724486ab6a166e9ce9ee2d91e7763b5bc0c96486d8880adc682a171cfb99a"} Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.208687 5001 scope.go:117] "RemoveContainer" containerID="8932faba297bf3287e204ee8c0840334f493e95c8295907e9e8d74f54d94091a" Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.210371 5001 generic.go:334] "Generic (PLEG): container finished" podID="e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb" containerID="e27297bdb546fbbcfdf66dd7f6a4608570c116d79fa831e27a155b5f3e9b77bd" exitCode=0 Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.210421 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1455c-account-delete-d56r6" event={"ID":"e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb","Type":"ContainerDied","Data":"e27297bdb546fbbcfdf66dd7f6a4608570c116d79fa831e27a155b5f3e9b77bd"} Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.239605 5001 scope.go:117] "RemoveContainer" containerID="8932faba297bf3287e204ee8c0840334f493e95c8295907e9e8d74f54d94091a" Jan 28 17:42:33 crc kubenswrapper[5001]: E0128 17:42:33.240023 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8932faba297bf3287e204ee8c0840334f493e95c8295907e9e8d74f54d94091a\": container with ID starting with 8932faba297bf3287e204ee8c0840334f493e95c8295907e9e8d74f54d94091a not found: ID does not exist" containerID="8932faba297bf3287e204ee8c0840334f493e95c8295907e9e8d74f54d94091a" Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.240051 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8932faba297bf3287e204ee8c0840334f493e95c8295907e9e8d74f54d94091a"} err="failed to get container status \"8932faba297bf3287e204ee8c0840334f493e95c8295907e9e8d74f54d94091a\": rpc error: code = NotFound desc = could not find container \"8932faba297bf3287e204ee8c0840334f493e95c8295907e9e8d74f54d94091a\": container with ID starting with 8932faba297bf3287e204ee8c0840334f493e95c8295907e9e8d74f54d94091a not found: ID does not exist" Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.346051 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbfrg\" (UniqueName: \"kubernetes.io/projected/3f483fd4-461b-47b8-9e21-0398c809539c-kube-api-access-jbfrg\") pod \"3f483fd4-461b-47b8-9e21-0398c809539c\" (UID: \"3f483fd4-461b-47b8-9e21-0398c809539c\") " Jan 28 
17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.346102 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f483fd4-461b-47b8-9e21-0398c809539c-config-data\") pod \"3f483fd4-461b-47b8-9e21-0398c809539c\" (UID: \"3f483fd4-461b-47b8-9e21-0398c809539c\") " Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.350623 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f483fd4-461b-47b8-9e21-0398c809539c-kube-api-access-jbfrg" (OuterVolumeSpecName: "kube-api-access-jbfrg") pod "3f483fd4-461b-47b8-9e21-0398c809539c" (UID: "3f483fd4-461b-47b8-9e21-0398c809539c"). InnerVolumeSpecName "kube-api-access-jbfrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.384443 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f483fd4-461b-47b8-9e21-0398c809539c-config-data" (OuterVolumeSpecName: "config-data") pod "3f483fd4-461b-47b8-9e21-0398c809539c" (UID: "3f483fd4-461b-47b8-9e21-0398c809539c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.447934 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbfrg\" (UniqueName: \"kubernetes.io/projected/3f483fd4-461b-47b8-9e21-0398c809539c-kube-api-access-jbfrg\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.447970 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f483fd4-461b-47b8-9e21-0398c809539c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.583656 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi2588-account-delete-dnhxc" Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.588043 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.593859 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.692521 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptzd7\" (UniqueName: \"kubernetes.io/projected/c7e18326-2af3-4c01-a883-0aa78a1ca37e-kube-api-access-ptzd7\") pod \"c7e18326-2af3-4c01-a883-0aa78a1ca37e\" (UID: \"c7e18326-2af3-4c01-a883-0aa78a1ca37e\") " Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.692614 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7e18326-2af3-4c01-a883-0aa78a1ca37e-operator-scripts\") pod \"c7e18326-2af3-4c01-a883-0aa78a1ca37e\" (UID: \"c7e18326-2af3-4c01-a883-0aa78a1ca37e\") " Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.693719 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7e18326-2af3-4c01-a883-0aa78a1ca37e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7e18326-2af3-4c01-a883-0aa78a1ca37e" (UID: "c7e18326-2af3-4c01-a883-0aa78a1ca37e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.705192 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e18326-2af3-4c01-a883-0aa78a1ca37e-kube-api-access-ptzd7" (OuterVolumeSpecName: "kube-api-access-ptzd7") pod "c7e18326-2af3-4c01-a883-0aa78a1ca37e" (UID: "c7e18326-2af3-4c01-a883-0aa78a1ca37e"). InnerVolumeSpecName "kube-api-access-ptzd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.786054 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell091b9-account-delete-7hcf4" Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.794978 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptzd7\" (UniqueName: \"kubernetes.io/projected/c7e18326-2af3-4c01-a883-0aa78a1ca37e-kube-api-access-ptzd7\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.795033 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7e18326-2af3-4c01-a883-0aa78a1ca37e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.895817 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-727tt\" (UniqueName: \"kubernetes.io/projected/b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b-kube-api-access-727tt\") pod \"b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b\" (UID: \"b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b\") " Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.895911 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b-operator-scripts\") pod \"b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b\" (UID: \"b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b\") " Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.896894 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b" (UID: "b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.901202 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b-kube-api-access-727tt" (OuterVolumeSpecName: "kube-api-access-727tt") pod "b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b" (UID: "b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b"). InnerVolumeSpecName "kube-api-access-727tt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.999142 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6c9fdbf4-1b79-451b-90e9-5a437ee517b8" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.138:8775/\": read tcp 10.217.0.2:59276->10.217.0.138:8775: read: connection reset by peer" Jan 28 17:42:33 crc kubenswrapper[5001]: I0128 17:42:33.999479 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6c9fdbf4-1b79-451b-90e9-5a437ee517b8" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.138:8775/\": read tcp 10.217.0.2:59264->10.217.0.138:8775: read: connection reset by peer" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.000259 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-727tt\" (UniqueName: \"kubernetes.io/projected/b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b-kube-api-access-727tt\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.000300 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:34 crc kubenswrapper[5001]: E0128 17:42:34.064306 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eada5777af6487a568311314fc905ea62b7c2f3ab14eba6352bfaa782ba83c8b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:42:34 crc kubenswrapper[5001]: E0128 17:42:34.065612 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eada5777af6487a568311314fc905ea62b7c2f3ab14eba6352bfaa782ba83c8b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:42:34 crc kubenswrapper[5001]: E0128 17:42:34.067109 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eada5777af6487a568311314fc905ea62b7c2f3ab14eba6352bfaa782ba83c8b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:42:34 crc kubenswrapper[5001]: E0128 17:42:34.067148 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="52810d1f-ed60-4852-bcc1-88004d24baf6" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.100499 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.201890 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l48rp\" (UniqueName: \"kubernetes.io/projected/1f74f30e-eae5-44b0-b858-08841c899345-kube-api-access-l48rp\") pod \"1f74f30e-eae5-44b0-b858-08841c899345\" (UID: \"1f74f30e-eae5-44b0-b858-08841c899345\") " Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.202359 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f74f30e-eae5-44b0-b858-08841c899345-config-data\") pod \"1f74f30e-eae5-44b0-b858-08841c899345\" (UID: \"1f74f30e-eae5-44b0-b858-08841c899345\") " Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.206514 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f74f30e-eae5-44b0-b858-08841c899345-kube-api-access-l48rp" (OuterVolumeSpecName: "kube-api-access-l48rp") pod "1f74f30e-eae5-44b0-b858-08841c899345" (UID: "1f74f30e-eae5-44b0-b858-08841c899345"). InnerVolumeSpecName "kube-api-access-l48rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.225519 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell091b9-account-delete-7hcf4" event={"ID":"b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b","Type":"ContainerDied","Data":"2fc10ce39fbdea043b46edf35d57e92a5cacb42869bef7709636c9aae150c37c"} Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.225564 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fc10ce39fbdea043b46edf35d57e92a5cacb42869bef7709636c9aae150c37c" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.225574 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell091b9-account-delete-7hcf4" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.227332 5001 generic.go:334] "Generic (PLEG): container finished" podID="1f74f30e-eae5-44b0-b858-08841c899345" containerID="fb57ab9e1ccb67a23118791f0f0a107ad4928f86480088d664ab999b7f567209" exitCode=0 Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.227407 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"1f74f30e-eae5-44b0-b858-08841c899345","Type":"ContainerDied","Data":"fb57ab9e1ccb67a23118791f0f0a107ad4928f86480088d664ab999b7f567209"} Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.227440 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"1f74f30e-eae5-44b0-b858-08841c899345","Type":"ContainerDied","Data":"10aa9f8261ff23a68835af2bddceadb14bf4a0d798bd8628f88e718284b7c8ec"} Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.227462 5001 scope.go:117] "RemoveContainer" containerID="fb57ab9e1ccb67a23118791f0f0a107ad4928f86480088d664ab999b7f567209" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.227605 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.231309 5001 generic.go:334] "Generic (PLEG): container finished" podID="6c9fdbf4-1b79-451b-90e9-5a437ee517b8" containerID="36cc1c44917cfe711d55bf35e461294e5aff8be185280bc68094b842e25e6236" exitCode=0 Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.231374 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6c9fdbf4-1b79-451b-90e9-5a437ee517b8","Type":"ContainerDied","Data":"36cc1c44917cfe711d55bf35e461294e5aff8be185280bc68094b842e25e6236"} Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.233881 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi2588-account-delete-dnhxc" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.234748 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f74f30e-eae5-44b0-b858-08841c899345-config-data" (OuterVolumeSpecName: "config-data") pod "1f74f30e-eae5-44b0-b858-08841c899345" (UID: "1f74f30e-eae5-44b0-b858-08841c899345"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.234796 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi2588-account-delete-dnhxc" event={"ID":"c7e18326-2af3-4c01-a883-0aa78a1ca37e","Type":"ContainerDied","Data":"4d08151cd368186ddf328d274a55a9c1ce9f463c86964ce1a7331523f963bb0c"} Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.234823 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d08151cd368186ddf328d274a55a9c1ce9f463c86964ce1a7331523f963bb0c" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.258308 5001 scope.go:117] "RemoveContainer" containerID="fb57ab9e1ccb67a23118791f0f0a107ad4928f86480088d664ab999b7f567209" Jan 28 17:42:34 crc kubenswrapper[5001]: E0128 17:42:34.258890 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb57ab9e1ccb67a23118791f0f0a107ad4928f86480088d664ab999b7f567209\": container with ID starting with fb57ab9e1ccb67a23118791f0f0a107ad4928f86480088d664ab999b7f567209 not found: ID does not exist" containerID="fb57ab9e1ccb67a23118791f0f0a107ad4928f86480088d664ab999b7f567209" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.258929 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb57ab9e1ccb67a23118791f0f0a107ad4928f86480088d664ab999b7f567209"} err="failed to get container status \"fb57ab9e1ccb67a23118791f0f0a107ad4928f86480088d664ab999b7f567209\": rpc error: code = NotFound desc = could not find container \"fb57ab9e1ccb67a23118791f0f0a107ad4928f86480088d664ab999b7f567209\": container with ID starting with fb57ab9e1ccb67a23118791f0f0a107ad4928f86480088d664ab999b7f567209 not found: ID does not exist" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.304938 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f74f30e-eae5-44b0-b858-08841c899345-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.304999 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l48rp\" (UniqueName: 
\"kubernetes.io/projected/1f74f30e-eae5-44b0-b858-08841c899345-kube-api-access-l48rp\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:34 crc kubenswrapper[5001]: E0128 17:42:34.305103 5001 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-api-config-data: secret "nova-kuttl-api-config-data" not found Jan 28 17:42:34 crc kubenswrapper[5001]: E0128 17:42:34.305163 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-config-data podName:f93149f4-7bcc-4c27-a4c6-93da8ac7693b nodeName:}" failed. No retries permitted until 2026-01-28 17:42:38.305142469 +0000 UTC m=+1604.472930689 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-config-data") pod "nova-kuttl-api-0" (UID: "f93149f4-7bcc-4c27-a4c6-93da8ac7693b") : secret "nova-kuttl-api-config-data" not found Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.479401 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.585160 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.610376 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f483fd4-461b-47b8-9e21-0398c809539c" path="/var/lib/kubelet/pods/3f483fd4-461b-47b8-9e21-0398c809539c/volumes" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.611558 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c9fdbf4-1b79-451b-90e9-5a437ee517b8-logs\") pod \"6c9fdbf4-1b79-451b-90e9-5a437ee517b8\" (UID: \"6c9fdbf4-1b79-451b-90e9-5a437ee517b8\") " Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.611646 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn55g\" (UniqueName: \"kubernetes.io/projected/6c9fdbf4-1b79-451b-90e9-5a437ee517b8-kube-api-access-pn55g\") pod \"6c9fdbf4-1b79-451b-90e9-5a437ee517b8\" (UID: \"6c9fdbf4-1b79-451b-90e9-5a437ee517b8\") " Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.611670 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c9fdbf4-1b79-451b-90e9-5a437ee517b8-config-data\") pod \"6c9fdbf4-1b79-451b-90e9-5a437ee517b8\" (UID: \"6c9fdbf4-1b79-451b-90e9-5a437ee517b8\") " Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.612096 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c9fdbf4-1b79-451b-90e9-5a437ee517b8-logs" (OuterVolumeSpecName: "logs") pod "6c9fdbf4-1b79-451b-90e9-5a437ee517b8" (UID: "6c9fdbf4-1b79-451b-90e9-5a437ee517b8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.613224 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c9fdbf4-1b79-451b-90e9-5a437ee517b8-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.615440 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c9fdbf4-1b79-451b-90e9-5a437ee517b8-kube-api-access-pn55g" (OuterVolumeSpecName: "kube-api-access-pn55g") pod "6c9fdbf4-1b79-451b-90e9-5a437ee517b8" (UID: "6c9fdbf4-1b79-451b-90e9-5a437ee517b8"). InnerVolumeSpecName "kube-api-access-pn55g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.620163 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.652306 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9fdbf4-1b79-451b-90e9-5a437ee517b8-config-data" (OuterVolumeSpecName: "config-data") pod "6c9fdbf4-1b79-451b-90e9-5a437ee517b8" (UID: "6c9fdbf4-1b79-451b-90e9-5a437ee517b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.683076 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1455c-account-delete-d56r6" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.714472 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn55g\" (UniqueName: \"kubernetes.io/projected/6c9fdbf4-1b79-451b-90e9-5a437ee517b8-kube-api-access-pn55g\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.714495 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c9fdbf4-1b79-451b-90e9-5a437ee517b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.816589 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q2tm\" (UniqueName: \"kubernetes.io/projected/e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb-kube-api-access-4q2tm\") pod \"e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb\" (UID: \"e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb\") " Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.816679 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb-operator-scripts\") pod \"e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb\" (UID: \"e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb\") " Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.817484 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb" (UID: "e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.822512 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb-kube-api-access-4q2tm" (OuterVolumeSpecName: "kube-api-access-4q2tm") pod "e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb" (UID: "e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb"). InnerVolumeSpecName "kube-api-access-4q2tm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.921973 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q2tm\" (UniqueName: \"kubernetes.io/projected/e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb-kube-api-access-4q2tm\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:34 crc kubenswrapper[5001]: I0128 17:42:34.922316 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.040447 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.124622 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-logs\") pod \"f93149f4-7bcc-4c27-a4c6-93da8ac7693b\" (UID: \"f93149f4-7bcc-4c27-a4c6-93da8ac7693b\") " Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.124702 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-config-data\") pod \"f93149f4-7bcc-4c27-a4c6-93da8ac7693b\" (UID: \"f93149f4-7bcc-4c27-a4c6-93da8ac7693b\") " Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.124877 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64hwg\" (UniqueName: \"kubernetes.io/projected/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-kube-api-access-64hwg\") pod \"f93149f4-7bcc-4c27-a4c6-93da8ac7693b\" (UID: \"f93149f4-7bcc-4c27-a4c6-93da8ac7693b\") " Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.125095 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-logs" (OuterVolumeSpecName: "logs") pod "f93149f4-7bcc-4c27-a4c6-93da8ac7693b" (UID: "f93149f4-7bcc-4c27-a4c6-93da8ac7693b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.129128 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-kube-api-access-64hwg" (OuterVolumeSpecName: "kube-api-access-64hwg") pod "f93149f4-7bcc-4c27-a4c6-93da8ac7693b" (UID: "f93149f4-7bcc-4c27-a4c6-93da8ac7693b"). InnerVolumeSpecName "kube-api-access-64hwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.147233 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-config-data" (OuterVolumeSpecName: "config-data") pod "f93149f4-7bcc-4c27-a4c6-93da8ac7693b" (UID: "f93149f4-7bcc-4c27-a4c6-93da8ac7693b"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.226407 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.226444 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.226456 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64hwg\" (UniqueName: \"kubernetes.io/projected/f93149f4-7bcc-4c27-a4c6-93da8ac7693b-kube-api-access-64hwg\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.245571 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6c9fdbf4-1b79-451b-90e9-5a437ee517b8","Type":"ContainerDied","Data":"2915c336b1d201f065c98450a7a9f638395725a2dcfe9b2c12debcf9f823fe03"} Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.245627 5001 scope.go:117] "RemoveContainer" containerID="36cc1c44917cfe711d55bf35e461294e5aff8be185280bc68094b842e25e6236" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.245737 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.251169 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1455c-account-delete-d56r6" event={"ID":"e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb","Type":"ContainerDied","Data":"ced29fff8386da22aeb7f6f098cc6b6ad7e231f52cd98cf525293838107b254d"} Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.251239 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ced29fff8386da22aeb7f6f098cc6b6ad7e231f52cd98cf525293838107b254d" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.251316 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1455c-account-delete-d56r6" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.254122 5001 generic.go:334] "Generic (PLEG): container finished" podID="f93149f4-7bcc-4c27-a4c6-93da8ac7693b" containerID="86f2ffabfa17b389782980010cb581c76862d2e8368983c5f15b3f426679f4a2" exitCode=0 Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.254225 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"f93149f4-7bcc-4c27-a4c6-93da8ac7693b","Type":"ContainerDied","Data":"86f2ffabfa17b389782980010cb581c76862d2e8368983c5f15b3f426679f4a2"} Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.254259 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"f93149f4-7bcc-4c27-a4c6-93da8ac7693b","Type":"ContainerDied","Data":"e1c4f43c6dc9a87bbdf822de6a410e01edb3b4cd0dda28a292c40e7adf030d3f"} Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.254295 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.282245 5001 scope.go:117] "RemoveContainer" containerID="ce6123a504887e59a301819e3e5351759fd9d6b6ba771716defe2d505f461c60" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.299469 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.308515 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.315218 5001 scope.go:117] "RemoveContainer" containerID="86f2ffabfa17b389782980010cb581c76862d2e8368983c5f15b3f426679f4a2" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.317492 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.324865 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.337548 5001 scope.go:117] "RemoveContainer" containerID="6baac5d0503d3313d14fdedb79c7a79ee87ddcffaceb6d92aede2c83ae3334b1" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.353139 5001 scope.go:117] "RemoveContainer" containerID="86f2ffabfa17b389782980010cb581c76862d2e8368983c5f15b3f426679f4a2" Jan 28 17:42:35 crc kubenswrapper[5001]: E0128 17:42:35.353621 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86f2ffabfa17b389782980010cb581c76862d2e8368983c5f15b3f426679f4a2\": container with ID starting with 86f2ffabfa17b389782980010cb581c76862d2e8368983c5f15b3f426679f4a2 not found: ID does not exist" containerID="86f2ffabfa17b389782980010cb581c76862d2e8368983c5f15b3f426679f4a2" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.353649 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86f2ffabfa17b389782980010cb581c76862d2e8368983c5f15b3f426679f4a2"} err="failed to get container status \"86f2ffabfa17b389782980010cb581c76862d2e8368983c5f15b3f426679f4a2\": rpc error: code = NotFound desc = could not find container \"86f2ffabfa17b389782980010cb581c76862d2e8368983c5f15b3f426679f4a2\": container with ID starting with 86f2ffabfa17b389782980010cb581c76862d2e8368983c5f15b3f426679f4a2 not found: ID does not exist" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.353691 5001 scope.go:117] "RemoveContainer" containerID="6baac5d0503d3313d14fdedb79c7a79ee87ddcffaceb6d92aede2c83ae3334b1" Jan 28 17:42:35 crc kubenswrapper[5001]: E0128 17:42:35.353942 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6baac5d0503d3313d14fdedb79c7a79ee87ddcffaceb6d92aede2c83ae3334b1\": container with ID starting with 6baac5d0503d3313d14fdedb79c7a79ee87ddcffaceb6d92aede2c83ae3334b1 not found: ID does not exist" containerID="6baac5d0503d3313d14fdedb79c7a79ee87ddcffaceb6d92aede2c83ae3334b1" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.353961 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6baac5d0503d3313d14fdedb79c7a79ee87ddcffaceb6d92aede2c83ae3334b1"} err="failed to get container status \"6baac5d0503d3313d14fdedb79c7a79ee87ddcffaceb6d92aede2c83ae3334b1\": rpc error: code = NotFound desc = could not find container 
\"6baac5d0503d3313d14fdedb79c7a79ee87ddcffaceb6d92aede2c83ae3334b1\": container with ID starting with 6baac5d0503d3313d14fdedb79c7a79ee87ddcffaceb6d92aede2c83ae3334b1 not found: ID does not exist" Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.485953 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-wgwgx"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.493150 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-wgwgx"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.500795 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.506047 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell091b9-account-delete-7hcf4"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.511146 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-91b9-account-create-update-9fmrb"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.516156 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell091b9-account-delete-7hcf4"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.599026 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-chbgv"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.608242 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-chbgv"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.623862 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novaapi2588-account-delete-dnhxc"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.630352 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-2588-account-create-update-ngjsm"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.646050 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novaapi2588-account-delete-dnhxc"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.674055 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-2588-account-create-update-ngjsm"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.692893 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-72lfm"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.701254 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-72lfm"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.714044 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell1455c-account-delete-d56r6"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.720767 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.726106 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell1455c-account-delete-d56r6"] Jan 28 17:42:35 crc kubenswrapper[5001]: I0128 17:42:35.730850 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-455c-account-create-update-8fpxl"] Jan 28 17:42:36 crc kubenswrapper[5001]: I0128 17:42:36.603818 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1f74f30e-eae5-44b0-b858-08841c899345" path="/var/lib/kubelet/pods/1f74f30e-eae5-44b0-b858-08841c899345/volumes" Jan 28 17:42:36 crc kubenswrapper[5001]: I0128 17:42:36.605090 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5487ccf4-adee-4c40-bef7-75373ee69307" path="/var/lib/kubelet/pods/5487ccf4-adee-4c40-bef7-75373ee69307/volumes" Jan 28 17:42:36 crc kubenswrapper[5001]: I0128 17:42:36.605765 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c9fdbf4-1b79-451b-90e9-5a437ee517b8" path="/var/lib/kubelet/pods/6c9fdbf4-1b79-451b-90e9-5a437ee517b8/volumes" Jan 28 17:42:36 crc kubenswrapper[5001]: I0128 17:42:36.607267 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e06436b-4009-4d8c-81ec-680b1fc02b76" path="/var/lib/kubelet/pods/6e06436b-4009-4d8c-81ec-680b1fc02b76/volumes" Jan 28 17:42:36 crc kubenswrapper[5001]: I0128 17:42:36.608080 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99a2bd48-0920-45b1-bb67-816da79f3160" path="/var/lib/kubelet/pods/99a2bd48-0920-45b1-bb67-816da79f3160/volumes" Jan 28 17:42:36 crc kubenswrapper[5001]: I0128 17:42:36.608871 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f3732ce-50e1-4e2e-b082-f8b9b984226b" path="/var/lib/kubelet/pods/9f3732ce-50e1-4e2e-b082-f8b9b984226b/volumes" Jan 28 17:42:36 crc kubenswrapper[5001]: I0128 17:42:36.609509 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b" path="/var/lib/kubelet/pods/b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b/volumes" Jan 28 17:42:36 crc kubenswrapper[5001]: I0128 17:42:36.610706 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c39d7310-6252-4b44-82e3-0a239050e52d" path="/var/lib/kubelet/pods/c39d7310-6252-4b44-82e3-0a239050e52d/volumes" Jan 28 17:42:36 crc kubenswrapper[5001]: I0128 17:42:36.611569 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e18326-2af3-4c01-a883-0aa78a1ca37e" path="/var/lib/kubelet/pods/c7e18326-2af3-4c01-a883-0aa78a1ca37e/volumes" Jan 28 17:42:36 crc kubenswrapper[5001]: I0128 17:42:36.612215 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb" path="/var/lib/kubelet/pods/e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb/volumes" Jan 28 17:42:36 crc kubenswrapper[5001]: I0128 17:42:36.618782 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea91c114-af0b-41fe-a820-09a2dc5c555d" path="/var/lib/kubelet/pods/ea91c114-af0b-41fe-a820-09a2dc5c555d/volumes" Jan 28 17:42:36 crc kubenswrapper[5001]: I0128 17:42:36.619369 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f93149f4-7bcc-4c27-a4c6-93da8ac7693b" path="/var/lib/kubelet/pods/f93149f4-7bcc-4c27-a4c6-93da8ac7693b/volumes" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.925095 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-hnv54"] Jan 28 17:42:37 crc kubenswrapper[5001]: E0128 17:42:37.925725 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f483fd4-461b-47b8-9e21-0398c809539c" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.925743 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f483fd4-461b-47b8-9e21-0398c809539c" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:42:37 crc kubenswrapper[5001]: E0128 17:42:37.925753 5001 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f93149f4-7bcc-4c27-a4c6-93da8ac7693b" containerName="nova-kuttl-api-log" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.925761 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="f93149f4-7bcc-4c27-a4c6-93da8ac7693b" containerName="nova-kuttl-api-log" Jan 28 17:42:37 crc kubenswrapper[5001]: E0128 17:42:37.925776 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f74f30e-eae5-44b0-b858-08841c899345" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.925784 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f74f30e-eae5-44b0-b858-08841c899345" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:42:37 crc kubenswrapper[5001]: E0128 17:42:37.925799 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b" containerName="mariadb-account-delete" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.925806 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b" containerName="mariadb-account-delete" Jan 28 17:42:37 crc kubenswrapper[5001]: E0128 17:42:37.925817 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c9fdbf4-1b79-451b-90e9-5a437ee517b8" containerName="nova-kuttl-metadata-log" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.925824 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c9fdbf4-1b79-451b-90e9-5a437ee517b8" containerName="nova-kuttl-metadata-log" Jan 28 17:42:37 crc kubenswrapper[5001]: E0128 17:42:37.925833 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f93149f4-7bcc-4c27-a4c6-93da8ac7693b" containerName="nova-kuttl-api-api" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.925839 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="f93149f4-7bcc-4c27-a4c6-93da8ac7693b" containerName="nova-kuttl-api-api" Jan 28 17:42:37 crc kubenswrapper[5001]: E0128 17:42:37.925854 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e18326-2af3-4c01-a883-0aa78a1ca37e" containerName="mariadb-account-delete" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.925860 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e18326-2af3-4c01-a883-0aa78a1ca37e" containerName="mariadb-account-delete" Jan 28 17:42:37 crc kubenswrapper[5001]: E0128 17:42:37.925875 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb" containerName="mariadb-account-delete" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.925881 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb" containerName="mariadb-account-delete" Jan 28 17:42:37 crc kubenswrapper[5001]: E0128 17:42:37.925897 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aeb8343-bfe8-4d4f-80ff-50fb55910691" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.925904 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aeb8343-bfe8-4d4f-80ff-50fb55910691" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 28 17:42:37 crc kubenswrapper[5001]: E0128 17:42:37.925916 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c9fdbf4-1b79-451b-90e9-5a437ee517b8" containerName="nova-kuttl-metadata-metadata" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.925923 5001 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6c9fdbf4-1b79-451b-90e9-5a437ee517b8" containerName="nova-kuttl-metadata-metadata" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.926126 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="f93149f4-7bcc-4c27-a4c6-93da8ac7693b" containerName="nova-kuttl-api-log" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.926139 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f483fd4-461b-47b8-9e21-0398c809539c" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.926156 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="b140d13f-8cc5-4cec-b2e2-589ea7bbfd1b" containerName="mariadb-account-delete" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.926169 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e18326-2af3-4c01-a883-0aa78a1ca37e" containerName="mariadb-account-delete" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.926182 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="f93149f4-7bcc-4c27-a4c6-93da8ac7693b" containerName="nova-kuttl-api-api" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.926193 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c9fdbf4-1b79-451b-90e9-5a437ee517b8" containerName="nova-kuttl-metadata-log" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.926203 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aeb8343-bfe8-4d4f-80ff-50fb55910691" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.926213 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f74f30e-eae5-44b0-b858-08841c899345" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.926225 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c9fdbf4-1b79-451b-90e9-5a437ee517b8" containerName="nova-kuttl-metadata-metadata" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.926237 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1f5e61e-5b48-4ad0-af0a-761a19c4f5cb" containerName="mariadb-account-delete" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.928025 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-hnv54" Jan 28 17:42:37 crc kubenswrapper[5001]: I0128 17:42:37.944987 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-hnv54"] Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.032051 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-89s8q"] Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.033535 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-89s8q" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.045254 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-89s8q"] Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.066275 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjt2v\" (UniqueName: \"kubernetes.io/projected/bec96a81-f6e1-4b8d-8955-cb7e63ae243d-kube-api-access-tjt2v\") pod \"nova-api-db-create-hnv54\" (UID: \"bec96a81-f6e1-4b8d-8955-cb7e63ae243d\") " pod="nova-kuttl-default/nova-api-db-create-hnv54" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.066382 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec96a81-f6e1-4b8d-8955-cb7e63ae243d-operator-scripts\") pod \"nova-api-db-create-hnv54\" (UID: \"bec96a81-f6e1-4b8d-8955-cb7e63ae243d\") " pod="nova-kuttl-default/nova-api-db-create-hnv54" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.138072 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-4401-account-create-update-8l8vr"] Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.138993 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-4401-account-create-update-8l8vr" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.141171 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.171496 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjt2v\" (UniqueName: \"kubernetes.io/projected/bec96a81-f6e1-4b8d-8955-cb7e63ae243d-kube-api-access-tjt2v\") pod \"nova-api-db-create-hnv54\" (UID: \"bec96a81-f6e1-4b8d-8955-cb7e63ae243d\") " pod="nova-kuttl-default/nova-api-db-create-hnv54" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.171580 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3-operator-scripts\") pod \"nova-cell0-db-create-89s8q\" (UID: \"7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3\") " pod="nova-kuttl-default/nova-cell0-db-create-89s8q" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.171622 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec96a81-f6e1-4b8d-8955-cb7e63ae243d-operator-scripts\") pod \"nova-api-db-create-hnv54\" (UID: \"bec96a81-f6e1-4b8d-8955-cb7e63ae243d\") " pod="nova-kuttl-default/nova-api-db-create-hnv54" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.171685 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj74l\" (UniqueName: \"kubernetes.io/projected/7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3-kube-api-access-wj74l\") pod \"nova-cell0-db-create-89s8q\" (UID: \"7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3\") " pod="nova-kuttl-default/nova-cell0-db-create-89s8q" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.173926 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec96a81-f6e1-4b8d-8955-cb7e63ae243d-operator-scripts\") pod 
\"nova-api-db-create-hnv54\" (UID: \"bec96a81-f6e1-4b8d-8955-cb7e63ae243d\") " pod="nova-kuttl-default/nova-api-db-create-hnv54" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.178624 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-4401-account-create-update-8l8vr"] Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.193488 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjt2v\" (UniqueName: \"kubernetes.io/projected/bec96a81-f6e1-4b8d-8955-cb7e63ae243d-kube-api-access-tjt2v\") pod \"nova-api-db-create-hnv54\" (UID: \"bec96a81-f6e1-4b8d-8955-cb7e63ae243d\") " pod="nova-kuttl-default/nova-api-db-create-hnv54" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.236287 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-nvnhv"] Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.237746 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-nvnhv" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.253596 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-nvnhv"] Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.275533 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj74l\" (UniqueName: \"kubernetes.io/projected/7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3-kube-api-access-wj74l\") pod \"nova-cell0-db-create-89s8q\" (UID: \"7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3\") " pod="nova-kuttl-default/nova-cell0-db-create-89s8q" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.275614 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4jxk\" (UniqueName: \"kubernetes.io/projected/fafaadec-e27a-42a8-86e3-b128add5edc3-kube-api-access-r4jxk\") pod \"nova-api-4401-account-create-update-8l8vr\" (UID: \"fafaadec-e27a-42a8-86e3-b128add5edc3\") " pod="nova-kuttl-default/nova-api-4401-account-create-update-8l8vr" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.275670 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3-operator-scripts\") pod \"nova-cell0-db-create-89s8q\" (UID: \"7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3\") " pod="nova-kuttl-default/nova-cell0-db-create-89s8q" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.275719 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fafaadec-e27a-42a8-86e3-b128add5edc3-operator-scripts\") pod \"nova-api-4401-account-create-update-8l8vr\" (UID: \"fafaadec-e27a-42a8-86e3-b128add5edc3\") " pod="nova-kuttl-default/nova-api-4401-account-create-update-8l8vr" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.278359 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-hnv54" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.279801 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3-operator-scripts\") pod \"nova-cell0-db-create-89s8q\" (UID: \"7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3\") " pod="nova-kuttl-default/nova-cell0-db-create-89s8q" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.293170 5001 generic.go:334] "Generic (PLEG): container finished" podID="52810d1f-ed60-4852-bcc1-88004d24baf6" containerID="eada5777af6487a568311314fc905ea62b7c2f3ab14eba6352bfaa782ba83c8b" exitCode=0 Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.293234 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"52810d1f-ed60-4852-bcc1-88004d24baf6","Type":"ContainerDied","Data":"eada5777af6487a568311314fc905ea62b7c2f3ab14eba6352bfaa782ba83c8b"} Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.311405 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj74l\" (UniqueName: \"kubernetes.io/projected/7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3-kube-api-access-wj74l\") pod \"nova-cell0-db-create-89s8q\" (UID: \"7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3\") " pod="nova-kuttl-default/nova-cell0-db-create-89s8q" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.342287 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-5d7a-account-create-update-9z49t"] Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.343596 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-5d7a-account-create-update-9z49t" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.346098 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.352049 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-5d7a-account-create-update-9z49t"] Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.365317 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-89s8q" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.377454 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4jxk\" (UniqueName: \"kubernetes.io/projected/fafaadec-e27a-42a8-86e3-b128add5edc3-kube-api-access-r4jxk\") pod \"nova-api-4401-account-create-update-8l8vr\" (UID: \"fafaadec-e27a-42a8-86e3-b128add5edc3\") " pod="nova-kuttl-default/nova-api-4401-account-create-update-8l8vr" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.377519 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef94c46-4f20-4956-a75a-ae044c4c64a9-operator-scripts\") pod \"nova-cell1-db-create-nvnhv\" (UID: \"0ef94c46-4f20-4956-a75a-ae044c4c64a9\") " pod="nova-kuttl-default/nova-cell1-db-create-nvnhv" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.377559 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9w7r\" (UniqueName: \"kubernetes.io/projected/0ef94c46-4f20-4956-a75a-ae044c4c64a9-kube-api-access-d9w7r\") pod \"nova-cell1-db-create-nvnhv\" (UID: \"0ef94c46-4f20-4956-a75a-ae044c4c64a9\") " pod="nova-kuttl-default/nova-cell1-db-create-nvnhv" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.377629 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fafaadec-e27a-42a8-86e3-b128add5edc3-operator-scripts\") pod \"nova-api-4401-account-create-update-8l8vr\" (UID: \"fafaadec-e27a-42a8-86e3-b128add5edc3\") " pod="nova-kuttl-default/nova-api-4401-account-create-update-8l8vr" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.378477 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fafaadec-e27a-42a8-86e3-b128add5edc3-operator-scripts\") pod \"nova-api-4401-account-create-update-8l8vr\" (UID: \"fafaadec-e27a-42a8-86e3-b128add5edc3\") " pod="nova-kuttl-default/nova-api-4401-account-create-update-8l8vr" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.394781 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4jxk\" (UniqueName: \"kubernetes.io/projected/fafaadec-e27a-42a8-86e3-b128add5edc3-kube-api-access-r4jxk\") pod \"nova-api-4401-account-create-update-8l8vr\" (UID: \"fafaadec-e27a-42a8-86e3-b128add5edc3\") " pod="nova-kuttl-default/nova-api-4401-account-create-update-8l8vr" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.438559 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.459601 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-4401-account-create-update-8l8vr" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.479092 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e1187ee-bb5c-4107-87b8-eee20cc5ef51-operator-scripts\") pod \"nova-cell0-5d7a-account-create-update-9z49t\" (UID: \"8e1187ee-bb5c-4107-87b8-eee20cc5ef51\") " pod="nova-kuttl-default/nova-cell0-5d7a-account-create-update-9z49t" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.479250 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tczjh\" (UniqueName: \"kubernetes.io/projected/8e1187ee-bb5c-4107-87b8-eee20cc5ef51-kube-api-access-tczjh\") pod \"nova-cell0-5d7a-account-create-update-9z49t\" (UID: \"8e1187ee-bb5c-4107-87b8-eee20cc5ef51\") " pod="nova-kuttl-default/nova-cell0-5d7a-account-create-update-9z49t" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.479483 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef94c46-4f20-4956-a75a-ae044c4c64a9-operator-scripts\") pod \"nova-cell1-db-create-nvnhv\" (UID: \"0ef94c46-4f20-4956-a75a-ae044c4c64a9\") " pod="nova-kuttl-default/nova-cell1-db-create-nvnhv" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.479601 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9w7r\" (UniqueName: \"kubernetes.io/projected/0ef94c46-4f20-4956-a75a-ae044c4c64a9-kube-api-access-d9w7r\") pod \"nova-cell1-db-create-nvnhv\" (UID: \"0ef94c46-4f20-4956-a75a-ae044c4c64a9\") " pod="nova-kuttl-default/nova-cell1-db-create-nvnhv" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.480145 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef94c46-4f20-4956-a75a-ae044c4c64a9-operator-scripts\") pod \"nova-cell1-db-create-nvnhv\" (UID: \"0ef94c46-4f20-4956-a75a-ae044c4c64a9\") " pod="nova-kuttl-default/nova-cell1-db-create-nvnhv" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.503732 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9w7r\" (UniqueName: \"kubernetes.io/projected/0ef94c46-4f20-4956-a75a-ae044c4c64a9-kube-api-access-d9w7r\") pod \"nova-cell1-db-create-nvnhv\" (UID: \"0ef94c46-4f20-4956-a75a-ae044c4c64a9\") " pod="nova-kuttl-default/nova-cell1-db-create-nvnhv" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.533080 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-8e0b-account-create-update-rcnsv"] Jan 28 17:42:38 crc kubenswrapper[5001]: E0128 17:42:38.533493 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52810d1f-ed60-4852-bcc1-88004d24baf6" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.533506 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="52810d1f-ed60-4852-bcc1-88004d24baf6" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.533668 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="52810d1f-ed60-4852-bcc1-88004d24baf6" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.534216 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-8e0b-account-create-update-rcnsv" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.536242 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.544344 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-8e0b-account-create-update-rcnsv"] Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.563965 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-nvnhv" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.581085 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52810d1f-ed60-4852-bcc1-88004d24baf6-config-data\") pod \"52810d1f-ed60-4852-bcc1-88004d24baf6\" (UID: \"52810d1f-ed60-4852-bcc1-88004d24baf6\") " Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.581149 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jgzc\" (UniqueName: \"kubernetes.io/projected/52810d1f-ed60-4852-bcc1-88004d24baf6-kube-api-access-8jgzc\") pod \"52810d1f-ed60-4852-bcc1-88004d24baf6\" (UID: \"52810d1f-ed60-4852-bcc1-88004d24baf6\") " Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.581710 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e1187ee-bb5c-4107-87b8-eee20cc5ef51-operator-scripts\") pod \"nova-cell0-5d7a-account-create-update-9z49t\" (UID: \"8e1187ee-bb5c-4107-87b8-eee20cc5ef51\") " pod="nova-kuttl-default/nova-cell0-5d7a-account-create-update-9z49t" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.581766 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tczjh\" (UniqueName: \"kubernetes.io/projected/8e1187ee-bb5c-4107-87b8-eee20cc5ef51-kube-api-access-tczjh\") pod \"nova-cell0-5d7a-account-create-update-9z49t\" (UID: \"8e1187ee-bb5c-4107-87b8-eee20cc5ef51\") " pod="nova-kuttl-default/nova-cell0-5d7a-account-create-update-9z49t" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.585611 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e1187ee-bb5c-4107-87b8-eee20cc5ef51-operator-scripts\") pod \"nova-cell0-5d7a-account-create-update-9z49t\" (UID: \"8e1187ee-bb5c-4107-87b8-eee20cc5ef51\") " pod="nova-kuttl-default/nova-cell0-5d7a-account-create-update-9z49t" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.589345 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52810d1f-ed60-4852-bcc1-88004d24baf6-kube-api-access-8jgzc" (OuterVolumeSpecName: "kube-api-access-8jgzc") pod "52810d1f-ed60-4852-bcc1-88004d24baf6" (UID: "52810d1f-ed60-4852-bcc1-88004d24baf6"). InnerVolumeSpecName "kube-api-access-8jgzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.604723 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tczjh\" (UniqueName: \"kubernetes.io/projected/8e1187ee-bb5c-4107-87b8-eee20cc5ef51-kube-api-access-tczjh\") pod \"nova-cell0-5d7a-account-create-update-9z49t\" (UID: \"8e1187ee-bb5c-4107-87b8-eee20cc5ef51\") " pod="nova-kuttl-default/nova-cell0-5d7a-account-create-update-9z49t" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.612015 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52810d1f-ed60-4852-bcc1-88004d24baf6-config-data" (OuterVolumeSpecName: "config-data") pod "52810d1f-ed60-4852-bcc1-88004d24baf6" (UID: "52810d1f-ed60-4852-bcc1-88004d24baf6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.683238 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e4cae00-3e87-4f4b-9222-ef8633872283-operator-scripts\") pod \"nova-cell1-8e0b-account-create-update-rcnsv\" (UID: \"3e4cae00-3e87-4f4b-9222-ef8633872283\") " pod="nova-kuttl-default/nova-cell1-8e0b-account-create-update-rcnsv" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.683465 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtl5f\" (UniqueName: \"kubernetes.io/projected/3e4cae00-3e87-4f4b-9222-ef8633872283-kube-api-access-vtl5f\") pod \"nova-cell1-8e0b-account-create-update-rcnsv\" (UID: \"3e4cae00-3e87-4f4b-9222-ef8633872283\") " pod="nova-kuttl-default/nova-cell1-8e0b-account-create-update-rcnsv" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.683555 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jgzc\" (UniqueName: \"kubernetes.io/projected/52810d1f-ed60-4852-bcc1-88004d24baf6-kube-api-access-8jgzc\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.683575 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52810d1f-ed60-4852-bcc1-88004d24baf6-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.752712 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-5d7a-account-create-update-9z49t" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.770584 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-hnv54"] Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.785219 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtl5f\" (UniqueName: \"kubernetes.io/projected/3e4cae00-3e87-4f4b-9222-ef8633872283-kube-api-access-vtl5f\") pod \"nova-cell1-8e0b-account-create-update-rcnsv\" (UID: \"3e4cae00-3e87-4f4b-9222-ef8633872283\") " pod="nova-kuttl-default/nova-cell1-8e0b-account-create-update-rcnsv" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.785643 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e4cae00-3e87-4f4b-9222-ef8633872283-operator-scripts\") pod \"nova-cell1-8e0b-account-create-update-rcnsv\" (UID: \"3e4cae00-3e87-4f4b-9222-ef8633872283\") " pod="nova-kuttl-default/nova-cell1-8e0b-account-create-update-rcnsv" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.786310 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e4cae00-3e87-4f4b-9222-ef8633872283-operator-scripts\") pod \"nova-cell1-8e0b-account-create-update-rcnsv\" (UID: \"3e4cae00-3e87-4f4b-9222-ef8633872283\") " pod="nova-kuttl-default/nova-cell1-8e0b-account-create-update-rcnsv" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.805902 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtl5f\" (UniqueName: \"kubernetes.io/projected/3e4cae00-3e87-4f4b-9222-ef8633872283-kube-api-access-vtl5f\") pod \"nova-cell1-8e0b-account-create-update-rcnsv\" (UID: \"3e4cae00-3e87-4f4b-9222-ef8633872283\") " pod="nova-kuttl-default/nova-cell1-8e0b-account-create-update-rcnsv" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.862226 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-8e0b-account-create-update-rcnsv" Jan 28 17:42:38 crc kubenswrapper[5001]: I0128 17:42:38.915525 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-89s8q"] Jan 28 17:42:39 crc kubenswrapper[5001]: I0128 17:42:39.012258 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-4401-account-create-update-8l8vr"] Jan 28 17:42:39 crc kubenswrapper[5001]: W0128 17:42:39.028030 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfafaadec_e27a_42a8_86e3_b128add5edc3.slice/crio-7fa468aa6aed98be1fea7f1346aa84fc98e4a85582d609b93ff355db6c66d37e WatchSource:0}: Error finding container 7fa468aa6aed98be1fea7f1346aa84fc98e4a85582d609b93ff355db6c66d37e: Status 404 returned error can't find the container with id 7fa468aa6aed98be1fea7f1346aa84fc98e4a85582d609b93ff355db6c66d37e Jan 28 17:42:39 crc kubenswrapper[5001]: I0128 17:42:39.150585 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-nvnhv"] Jan 28 17:42:39 crc kubenswrapper[5001]: I0128 17:42:39.300676 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-5d7a-account-create-update-9z49t"] Jan 28 17:42:39 crc kubenswrapper[5001]: I0128 17:42:39.304558 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-nvnhv" event={"ID":"0ef94c46-4f20-4956-a75a-ae044c4c64a9","Type":"ContainerStarted","Data":"7053a0dff53cfe51f71128fa96e652f6e13a5b523d4d0aca64d56c0929cee1f3"} Jan 28 17:42:39 crc kubenswrapper[5001]: I0128 17:42:39.306819 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-hnv54" event={"ID":"bec96a81-f6e1-4b8d-8955-cb7e63ae243d","Type":"ContainerStarted","Data":"1cbdae035b35c91eb80a0768161a9fce9beb3df3d1d52410a3e8bd8dbb94b566"} Jan 28 17:42:39 crc kubenswrapper[5001]: I0128 17:42:39.306862 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-hnv54" event={"ID":"bec96a81-f6e1-4b8d-8955-cb7e63ae243d","Type":"ContainerStarted","Data":"e2ee26bd0f3104f7fdd73ae0760b40a5d4c2fdd993975e2de56469089ec990f3"} Jan 28 17:42:39 crc kubenswrapper[5001]: W0128 17:42:39.307322 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e1187ee_bb5c_4107_87b8_eee20cc5ef51.slice/crio-34d58d7a5f39814637d767f3bfa0577d01b8e0a6ad906ad4e97a8952986e9ac4 WatchSource:0}: Error finding container 34d58d7a5f39814637d767f3bfa0577d01b8e0a6ad906ad4e97a8952986e9ac4: Status 404 returned error can't find the container with id 34d58d7a5f39814637d767f3bfa0577d01b8e0a6ad906ad4e97a8952986e9ac4 Jan 28 17:42:39 crc kubenswrapper[5001]: I0128 17:42:39.310038 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-89s8q" event={"ID":"7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3","Type":"ContainerStarted","Data":"34a4e140e1f55be22484c8c22761693863cf40b1c5716d185f368d2de31c57dc"} Jan 28 17:42:39 crc kubenswrapper[5001]: I0128 17:42:39.310077 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-89s8q" event={"ID":"7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3","Type":"ContainerStarted","Data":"8c1ca8d47847d7e8cd5c771c81ba33de7a6571f0b75c21386206b27bfc6e4051"} Jan 28 17:42:39 crc kubenswrapper[5001]: I0128 
17:42:39.312943 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"52810d1f-ed60-4852-bcc1-88004d24baf6","Type":"ContainerDied","Data":"a0b05873102799833f5cf58d521353952d4de1d45d0a3a1fa03fd906c2ddc027"} Jan 28 17:42:39 crc kubenswrapper[5001]: I0128 17:42:39.313048 5001 scope.go:117] "RemoveContainer" containerID="eada5777af6487a568311314fc905ea62b7c2f3ab14eba6352bfaa782ba83c8b" Jan 28 17:42:39 crc kubenswrapper[5001]: I0128 17:42:39.312962 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:42:39 crc kubenswrapper[5001]: I0128 17:42:39.323210 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-4401-account-create-update-8l8vr" event={"ID":"fafaadec-e27a-42a8-86e3-b128add5edc3","Type":"ContainerStarted","Data":"d03933df35afaa9bcc74e4255c2123d197033695fe6d97b42837411ee3e8acbf"} Jan 28 17:42:39 crc kubenswrapper[5001]: I0128 17:42:39.323288 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-4401-account-create-update-8l8vr" event={"ID":"fafaadec-e27a-42a8-86e3-b128add5edc3","Type":"ContainerStarted","Data":"7fa468aa6aed98be1fea7f1346aa84fc98e4a85582d609b93ff355db6c66d37e"} Jan 28 17:42:39 crc kubenswrapper[5001]: I0128 17:42:39.329060 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-api-db-create-hnv54" podStartSLOduration=2.329042364 podStartE2EDuration="2.329042364s" podCreationTimestamp="2026-01-28 17:42:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:42:39.323269237 +0000 UTC m=+1605.491057477" watchObservedRunningTime="2026-01-28 17:42:39.329042364 +0000 UTC m=+1605.496830584" Jan 28 17:42:39 crc kubenswrapper[5001]: I0128 17:42:39.342655 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:42:39 crc kubenswrapper[5001]: I0128 17:42:39.350475 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:42:39 crc kubenswrapper[5001]: I0128 17:42:39.356529 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-api-4401-account-create-update-8l8vr" podStartSLOduration=1.356506967 podStartE2EDuration="1.356506967s" podCreationTimestamp="2026-01-28 17:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:42:39.354588902 +0000 UTC m=+1605.522377132" watchObservedRunningTime="2026-01-28 17:42:39.356506967 +0000 UTC m=+1605.524295197" Jan 28 17:42:39 crc kubenswrapper[5001]: I0128 17:42:39.392802 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-cell0-db-create-89s8q" podStartSLOduration=1.3927756740000001 podStartE2EDuration="1.392775674s" podCreationTimestamp="2026-01-28 17:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:42:39.387486142 +0000 UTC m=+1605.555274372" watchObservedRunningTime="2026-01-28 17:42:39.392775674 +0000 UTC m=+1605.560563904" Jan 28 17:42:39 crc kubenswrapper[5001]: I0128 17:42:39.430288 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["nova-kuttl-default/nova-cell1-8e0b-account-create-update-rcnsv"] Jan 28 17:42:39 crc kubenswrapper[5001]: W0128 17:42:39.436260 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e4cae00_3e87_4f4b_9222_ef8633872283.slice/crio-738a83aee8860d081b937235f68cc2ef64c3b7ee8f4ee0658f0d6b264f4a97d7 WatchSource:0}: Error finding container 738a83aee8860d081b937235f68cc2ef64c3b7ee8f4ee0658f0d6b264f4a97d7: Status 404 returned error can't find the container with id 738a83aee8860d081b937235f68cc2ef64c3b7ee8f4ee0658f0d6b264f4a97d7 Jan 28 17:42:40 crc kubenswrapper[5001]: I0128 17:42:40.490885 5001 generic.go:334] "Generic (PLEG): container finished" podID="fafaadec-e27a-42a8-86e3-b128add5edc3" containerID="d03933df35afaa9bcc74e4255c2123d197033695fe6d97b42837411ee3e8acbf" exitCode=0 Jan 28 17:42:40 crc kubenswrapper[5001]: I0128 17:42:40.491565 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-4401-account-create-update-8l8vr" event={"ID":"fafaadec-e27a-42a8-86e3-b128add5edc3","Type":"ContainerDied","Data":"d03933df35afaa9bcc74e4255c2123d197033695fe6d97b42837411ee3e8acbf"} Jan 28 17:42:40 crc kubenswrapper[5001]: I0128 17:42:40.512688 5001 generic.go:334] "Generic (PLEG): container finished" podID="8e1187ee-bb5c-4107-87b8-eee20cc5ef51" containerID="b0c6ad716ebec2128d1638c90aaf9082b6b61f49db17f75126c559ce249891ef" exitCode=0 Jan 28 17:42:40 crc kubenswrapper[5001]: I0128 17:42:40.512766 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-5d7a-account-create-update-9z49t" event={"ID":"8e1187ee-bb5c-4107-87b8-eee20cc5ef51","Type":"ContainerDied","Data":"b0c6ad716ebec2128d1638c90aaf9082b6b61f49db17f75126c559ce249891ef"} Jan 28 17:42:40 crc kubenswrapper[5001]: I0128 17:42:40.512796 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-5d7a-account-create-update-9z49t" event={"ID":"8e1187ee-bb5c-4107-87b8-eee20cc5ef51","Type":"ContainerStarted","Data":"34d58d7a5f39814637d767f3bfa0577d01b8e0a6ad906ad4e97a8952986e9ac4"} Jan 28 17:42:40 crc kubenswrapper[5001]: I0128 17:42:40.514550 5001 generic.go:334] "Generic (PLEG): container finished" podID="0ef94c46-4f20-4956-a75a-ae044c4c64a9" containerID="c4277f1a5701d2934afceafcc83652d5616fa398f2caa7ec70e5eaec890cbd69" exitCode=0 Jan 28 17:42:40 crc kubenswrapper[5001]: I0128 17:42:40.514593 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-nvnhv" event={"ID":"0ef94c46-4f20-4956-a75a-ae044c4c64a9","Type":"ContainerDied","Data":"c4277f1a5701d2934afceafcc83652d5616fa398f2caa7ec70e5eaec890cbd69"} Jan 28 17:42:40 crc kubenswrapper[5001]: I0128 17:42:40.515825 5001 generic.go:334] "Generic (PLEG): container finished" podID="bec96a81-f6e1-4b8d-8955-cb7e63ae243d" containerID="1cbdae035b35c91eb80a0768161a9fce9beb3df3d1d52410a3e8bd8dbb94b566" exitCode=0 Jan 28 17:42:40 crc kubenswrapper[5001]: I0128 17:42:40.515882 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-hnv54" event={"ID":"bec96a81-f6e1-4b8d-8955-cb7e63ae243d","Type":"ContainerDied","Data":"1cbdae035b35c91eb80a0768161a9fce9beb3df3d1d52410a3e8bd8dbb94b566"} Jan 28 17:42:40 crc kubenswrapper[5001]: I0128 17:42:40.517476 5001 generic.go:334] "Generic (PLEG): container finished" podID="3e4cae00-3e87-4f4b-9222-ef8633872283" containerID="77db723e95a5025abcdffb0e2dcd81478fde4906f3fd6c48fcd24a0bfc58f003" exitCode=0 Jan 28 17:42:40 
crc kubenswrapper[5001]: I0128 17:42:40.517567 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-8e0b-account-create-update-rcnsv" event={"ID":"3e4cae00-3e87-4f4b-9222-ef8633872283","Type":"ContainerDied","Data":"77db723e95a5025abcdffb0e2dcd81478fde4906f3fd6c48fcd24a0bfc58f003"} Jan 28 17:42:40 crc kubenswrapper[5001]: I0128 17:42:40.517582 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-8e0b-account-create-update-rcnsv" event={"ID":"3e4cae00-3e87-4f4b-9222-ef8633872283","Type":"ContainerStarted","Data":"738a83aee8860d081b937235f68cc2ef64c3b7ee8f4ee0658f0d6b264f4a97d7"} Jan 28 17:42:40 crc kubenswrapper[5001]: I0128 17:42:40.518870 5001 generic.go:334] "Generic (PLEG): container finished" podID="7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3" containerID="34a4e140e1f55be22484c8c22761693863cf40b1c5716d185f368d2de31c57dc" exitCode=0 Jan 28 17:42:40 crc kubenswrapper[5001]: I0128 17:42:40.518910 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-89s8q" event={"ID":"7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3","Type":"ContainerDied","Data":"34a4e140e1f55be22484c8c22761693863cf40b1c5716d185f368d2de31c57dc"} Jan 28 17:42:40 crc kubenswrapper[5001]: I0128 17:42:40.613519 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52810d1f-ed60-4852-bcc1-88004d24baf6" path="/var/lib/kubelet/pods/52810d1f-ed60-4852-bcc1-88004d24baf6/volumes" Jan 28 17:42:41 crc kubenswrapper[5001]: I0128 17:42:41.915629 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-5d7a-account-create-update-9z49t" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.015667 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tczjh\" (UniqueName: \"kubernetes.io/projected/8e1187ee-bb5c-4107-87b8-eee20cc5ef51-kube-api-access-tczjh\") pod \"8e1187ee-bb5c-4107-87b8-eee20cc5ef51\" (UID: \"8e1187ee-bb5c-4107-87b8-eee20cc5ef51\") " Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.015745 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e1187ee-bb5c-4107-87b8-eee20cc5ef51-operator-scripts\") pod \"8e1187ee-bb5c-4107-87b8-eee20cc5ef51\" (UID: \"8e1187ee-bb5c-4107-87b8-eee20cc5ef51\") " Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.016423 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e1187ee-bb5c-4107-87b8-eee20cc5ef51-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e1187ee-bb5c-4107-87b8-eee20cc5ef51" (UID: "8e1187ee-bb5c-4107-87b8-eee20cc5ef51"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.056309 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e1187ee-bb5c-4107-87b8-eee20cc5ef51-kube-api-access-tczjh" (OuterVolumeSpecName: "kube-api-access-tczjh") pod "8e1187ee-bb5c-4107-87b8-eee20cc5ef51" (UID: "8e1187ee-bb5c-4107-87b8-eee20cc5ef51"). InnerVolumeSpecName "kube-api-access-tczjh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.117482 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e1187ee-bb5c-4107-87b8-eee20cc5ef51-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.117512 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tczjh\" (UniqueName: \"kubernetes.io/projected/8e1187ee-bb5c-4107-87b8-eee20cc5ef51-kube-api-access-tczjh\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.195693 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-hnv54" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.202306 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-89s8q" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.215136 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-nvnhv" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.222734 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-4401-account-create-update-8l8vr" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.234681 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-8e0b-account-create-update-rcnsv" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.320158 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef94c46-4f20-4956-a75a-ae044c4c64a9-operator-scripts\") pod \"0ef94c46-4f20-4956-a75a-ae044c4c64a9\" (UID: \"0ef94c46-4f20-4956-a75a-ae044c4c64a9\") " Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.320288 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4jxk\" (UniqueName: \"kubernetes.io/projected/fafaadec-e27a-42a8-86e3-b128add5edc3-kube-api-access-r4jxk\") pod \"fafaadec-e27a-42a8-86e3-b128add5edc3\" (UID: \"fafaadec-e27a-42a8-86e3-b128add5edc3\") " Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.320313 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjt2v\" (UniqueName: \"kubernetes.io/projected/bec96a81-f6e1-4b8d-8955-cb7e63ae243d-kube-api-access-tjt2v\") pod \"bec96a81-f6e1-4b8d-8955-cb7e63ae243d\" (UID: \"bec96a81-f6e1-4b8d-8955-cb7e63ae243d\") " Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.320363 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3-operator-scripts\") pod \"7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3\" (UID: \"7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3\") " Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.320395 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj74l\" (UniqueName: \"kubernetes.io/projected/7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3-kube-api-access-wj74l\") pod \"7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3\" (UID: \"7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3\") " Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.320426 5001 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-d9w7r\" (UniqueName: \"kubernetes.io/projected/0ef94c46-4f20-4956-a75a-ae044c4c64a9-kube-api-access-d9w7r\") pod \"0ef94c46-4f20-4956-a75a-ae044c4c64a9\" (UID: \"0ef94c46-4f20-4956-a75a-ae044c4c64a9\") " Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.320463 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fafaadec-e27a-42a8-86e3-b128add5edc3-operator-scripts\") pod \"fafaadec-e27a-42a8-86e3-b128add5edc3\" (UID: \"fafaadec-e27a-42a8-86e3-b128add5edc3\") " Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.320500 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec96a81-f6e1-4b8d-8955-cb7e63ae243d-operator-scripts\") pod \"bec96a81-f6e1-4b8d-8955-cb7e63ae243d\" (UID: \"bec96a81-f6e1-4b8d-8955-cb7e63ae243d\") " Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.321115 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ef94c46-4f20-4956-a75a-ae044c4c64a9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ef94c46-4f20-4956-a75a-ae044c4c64a9" (UID: "0ef94c46-4f20-4956-a75a-ae044c4c64a9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.321593 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3" (UID: "7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.321696 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec96a81-f6e1-4b8d-8955-cb7e63ae243d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bec96a81-f6e1-4b8d-8955-cb7e63ae243d" (UID: "bec96a81-f6e1-4b8d-8955-cb7e63ae243d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.321693 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fafaadec-e27a-42a8-86e3-b128add5edc3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fafaadec-e27a-42a8-86e3-b128add5edc3" (UID: "fafaadec-e27a-42a8-86e3-b128add5edc3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.323225 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ef94c46-4f20-4956-a75a-ae044c4c64a9-kube-api-access-d9w7r" (OuterVolumeSpecName: "kube-api-access-d9w7r") pod "0ef94c46-4f20-4956-a75a-ae044c4c64a9" (UID: "0ef94c46-4f20-4956-a75a-ae044c4c64a9"). InnerVolumeSpecName "kube-api-access-d9w7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.323554 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3-kube-api-access-wj74l" (OuterVolumeSpecName: "kube-api-access-wj74l") pod "7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3" (UID: "7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3"). InnerVolumeSpecName "kube-api-access-wj74l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.323682 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fafaadec-e27a-42a8-86e3-b128add5edc3-kube-api-access-r4jxk" (OuterVolumeSpecName: "kube-api-access-r4jxk") pod "fafaadec-e27a-42a8-86e3-b128add5edc3" (UID: "fafaadec-e27a-42a8-86e3-b128add5edc3"). InnerVolumeSpecName "kube-api-access-r4jxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.323895 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec96a81-f6e1-4b8d-8955-cb7e63ae243d-kube-api-access-tjt2v" (OuterVolumeSpecName: "kube-api-access-tjt2v") pod "bec96a81-f6e1-4b8d-8955-cb7e63ae243d" (UID: "bec96a81-f6e1-4b8d-8955-cb7e63ae243d"). InnerVolumeSpecName "kube-api-access-tjt2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.421279 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtl5f\" (UniqueName: \"kubernetes.io/projected/3e4cae00-3e87-4f4b-9222-ef8633872283-kube-api-access-vtl5f\") pod \"3e4cae00-3e87-4f4b-9222-ef8633872283\" (UID: \"3e4cae00-3e87-4f4b-9222-ef8633872283\") " Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.421395 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e4cae00-3e87-4f4b-9222-ef8633872283-operator-scripts\") pod \"3e4cae00-3e87-4f4b-9222-ef8633872283\" (UID: \"3e4cae00-3e87-4f4b-9222-ef8633872283\") " Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.421857 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.421885 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj74l\" (UniqueName: \"kubernetes.io/projected/7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3-kube-api-access-wj74l\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.421900 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9w7r\" (UniqueName: \"kubernetes.io/projected/0ef94c46-4f20-4956-a75a-ae044c4c64a9-kube-api-access-d9w7r\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.421912 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fafaadec-e27a-42a8-86e3-b128add5edc3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.421923 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec96a81-f6e1-4b8d-8955-cb7e63ae243d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 
28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.421934 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef94c46-4f20-4956-a75a-ae044c4c64a9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.421946 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4jxk\" (UniqueName: \"kubernetes.io/projected/fafaadec-e27a-42a8-86e3-b128add5edc3-kube-api-access-r4jxk\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.421957 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjt2v\" (UniqueName: \"kubernetes.io/projected/bec96a81-f6e1-4b8d-8955-cb7e63ae243d-kube-api-access-tjt2v\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.422488 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4cae00-3e87-4f4b-9222-ef8633872283-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3e4cae00-3e87-4f4b-9222-ef8633872283" (UID: "3e4cae00-3e87-4f4b-9222-ef8633872283"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.424427 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e4cae00-3e87-4f4b-9222-ef8633872283-kube-api-access-vtl5f" (OuterVolumeSpecName: "kube-api-access-vtl5f") pod "3e4cae00-3e87-4f4b-9222-ef8633872283" (UID: "3e4cae00-3e87-4f4b-9222-ef8633872283"). InnerVolumeSpecName "kube-api-access-vtl5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.524705 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtl5f\" (UniqueName: \"kubernetes.io/projected/3e4cae00-3e87-4f4b-9222-ef8633872283-kube-api-access-vtl5f\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.525160 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e4cae00-3e87-4f4b-9222-ef8633872283-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.546044 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-5d7a-account-create-update-9z49t" event={"ID":"8e1187ee-bb5c-4107-87b8-eee20cc5ef51","Type":"ContainerDied","Data":"34d58d7a5f39814637d767f3bfa0577d01b8e0a6ad906ad4e97a8952986e9ac4"} Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.546094 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34d58d7a5f39814637d767f3bfa0577d01b8e0a6ad906ad4e97a8952986e9ac4" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.546061 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-5d7a-account-create-update-9z49t" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.549141 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-nvnhv" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.549140 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-nvnhv" event={"ID":"0ef94c46-4f20-4956-a75a-ae044c4c64a9","Type":"ContainerDied","Data":"7053a0dff53cfe51f71128fa96e652f6e13a5b523d4d0aca64d56c0929cee1f3"} Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.549455 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7053a0dff53cfe51f71128fa96e652f6e13a5b523d4d0aca64d56c0929cee1f3" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.550969 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-hnv54" event={"ID":"bec96a81-f6e1-4b8d-8955-cb7e63ae243d","Type":"ContainerDied","Data":"e2ee26bd0f3104f7fdd73ae0760b40a5d4c2fdd993975e2de56469089ec990f3"} Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.551026 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2ee26bd0f3104f7fdd73ae0760b40a5d4c2fdd993975e2de56469089ec990f3" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.551085 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-hnv54" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.559396 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-8e0b-account-create-update-rcnsv" event={"ID":"3e4cae00-3e87-4f4b-9222-ef8633872283","Type":"ContainerDied","Data":"738a83aee8860d081b937235f68cc2ef64c3b7ee8f4ee0658f0d6b264f4a97d7"} Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.559435 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="738a83aee8860d081b937235f68cc2ef64c3b7ee8f4ee0658f0d6b264f4a97d7" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.559495 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-8e0b-account-create-update-rcnsv" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.568143 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-89s8q" event={"ID":"7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3","Type":"ContainerDied","Data":"8c1ca8d47847d7e8cd5c771c81ba33de7a6571f0b75c21386206b27bfc6e4051"} Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.568188 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c1ca8d47847d7e8cd5c771c81ba33de7a6571f0b75c21386206b27bfc6e4051" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.568196 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-89s8q" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.569728 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-4401-account-create-update-8l8vr" event={"ID":"fafaadec-e27a-42a8-86e3-b128add5edc3","Type":"ContainerDied","Data":"7fa468aa6aed98be1fea7f1346aa84fc98e4a85582d609b93ff355db6c66d37e"} Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.569756 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fa468aa6aed98be1fea7f1346aa84fc98e4a85582d609b93ff355db6c66d37e" Jan 28 17:42:42 crc kubenswrapper[5001]: I0128 17:42:42.569813 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-4401-account-create-update-8l8vr" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.546541 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds"] Jan 28 17:42:43 crc kubenswrapper[5001]: E0128 17:42:43.547154 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec96a81-f6e1-4b8d-8955-cb7e63ae243d" containerName="mariadb-database-create" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.547172 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec96a81-f6e1-4b8d-8955-cb7e63ae243d" containerName="mariadb-database-create" Jan 28 17:42:43 crc kubenswrapper[5001]: E0128 17:42:43.547194 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fafaadec-e27a-42a8-86e3-b128add5edc3" containerName="mariadb-account-create-update" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.547202 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="fafaadec-e27a-42a8-86e3-b128add5edc3" containerName="mariadb-account-create-update" Jan 28 17:42:43 crc kubenswrapper[5001]: E0128 17:42:43.547212 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3" containerName="mariadb-database-create" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.547221 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3" containerName="mariadb-database-create" Jan 28 17:42:43 crc kubenswrapper[5001]: E0128 17:42:43.547235 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e1187ee-bb5c-4107-87b8-eee20cc5ef51" containerName="mariadb-account-create-update" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.547242 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e1187ee-bb5c-4107-87b8-eee20cc5ef51" containerName="mariadb-account-create-update" Jan 28 17:42:43 crc kubenswrapper[5001]: E0128 17:42:43.547259 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef94c46-4f20-4956-a75a-ae044c4c64a9" containerName="mariadb-database-create" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.547267 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef94c46-4f20-4956-a75a-ae044c4c64a9" containerName="mariadb-database-create" Jan 28 17:42:43 crc kubenswrapper[5001]: E0128 17:42:43.547275 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4cae00-3e87-4f4b-9222-ef8633872283" containerName="mariadb-account-create-update" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.547282 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4cae00-3e87-4f4b-9222-ef8633872283" containerName="mariadb-account-create-update" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.547452 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec96a81-f6e1-4b8d-8955-cb7e63ae243d" containerName="mariadb-database-create" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.547473 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef94c46-4f20-4956-a75a-ae044c4c64a9" containerName="mariadb-database-create" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.547486 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e1187ee-bb5c-4107-87b8-eee20cc5ef51" containerName="mariadb-account-create-update" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.547497 5001 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3e4cae00-3e87-4f4b-9222-ef8633872283" containerName="mariadb-account-create-update" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.547513 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3" containerName="mariadb-database-create" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.547545 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="fafaadec-e27a-42a8-86e3-b128add5edc3" containerName="mariadb-account-create-update" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.548161 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.549889 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.558578 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-grkl6" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.559747 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds"] Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.561427 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.743305 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bd938d1-8621-4b3b-acad-d28619d75ca3-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-6xrds\" (UID: \"4bd938d1-8621-4b3b-acad-d28619d75ca3\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.743469 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bd938d1-8621-4b3b-acad-d28619d75ca3-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-6xrds\" (UID: \"4bd938d1-8621-4b3b-acad-d28619d75ca3\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.743504 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtnw8\" (UniqueName: \"kubernetes.io/projected/4bd938d1-8621-4b3b-acad-d28619d75ca3-kube-api-access-qtnw8\") pod \"nova-kuttl-cell0-conductor-db-sync-6xrds\" (UID: \"4bd938d1-8621-4b3b-acad-d28619d75ca3\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.844671 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bd938d1-8621-4b3b-acad-d28619d75ca3-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-6xrds\" (UID: \"4bd938d1-8621-4b3b-acad-d28619d75ca3\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.844719 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtnw8\" (UniqueName: \"kubernetes.io/projected/4bd938d1-8621-4b3b-acad-d28619d75ca3-kube-api-access-qtnw8\") pod \"nova-kuttl-cell0-conductor-db-sync-6xrds\" (UID: 
\"4bd938d1-8621-4b3b-acad-d28619d75ca3\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.844766 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bd938d1-8621-4b3b-acad-d28619d75ca3-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-6xrds\" (UID: \"4bd938d1-8621-4b3b-acad-d28619d75ca3\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.848904 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bd938d1-8621-4b3b-acad-d28619d75ca3-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-6xrds\" (UID: \"4bd938d1-8621-4b3b-acad-d28619d75ca3\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.850071 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bd938d1-8621-4b3b-acad-d28619d75ca3-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-6xrds\" (UID: \"4bd938d1-8621-4b3b-acad-d28619d75ca3\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds" Jan 28 17:42:43 crc kubenswrapper[5001]: I0128 17:42:43.862816 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtnw8\" (UniqueName: \"kubernetes.io/projected/4bd938d1-8621-4b3b-acad-d28619d75ca3-kube-api-access-qtnw8\") pod \"nova-kuttl-cell0-conductor-db-sync-6xrds\" (UID: \"4bd938d1-8621-4b3b-acad-d28619d75ca3\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds" Jan 28 17:42:44 crc kubenswrapper[5001]: I0128 17:42:44.162014 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds" Jan 28 17:42:44 crc kubenswrapper[5001]: I0128 17:42:44.570262 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds"] Jan 28 17:42:44 crc kubenswrapper[5001]: I0128 17:42:44.609792 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds" event={"ID":"4bd938d1-8621-4b3b-acad-d28619d75ca3","Type":"ContainerStarted","Data":"ab0ab7cf8875c12acd6521d73ea1ccdde7c9bd809c24c2f5c82cb21563a6359b"} Jan 28 17:42:45 crc kubenswrapper[5001]: I0128 17:42:45.605681 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds" event={"ID":"4bd938d1-8621-4b3b-acad-d28619d75ca3","Type":"ContainerStarted","Data":"4d18221118aec7722936677c613feef493599337945a733fa02cf935feb62952"} Jan 28 17:42:45 crc kubenswrapper[5001]: I0128 17:42:45.622134 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds" podStartSLOduration=2.622114808 podStartE2EDuration="2.622114808s" podCreationTimestamp="2026-01-28 17:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:42:45.621938743 +0000 UTC m=+1611.789726983" watchObservedRunningTime="2026-01-28 17:42:45.622114808 +0000 UTC m=+1611.789903038" Jan 28 17:42:50 crc kubenswrapper[5001]: I0128 17:42:50.650554 5001 generic.go:334] "Generic (PLEG): container finished" podID="4bd938d1-8621-4b3b-acad-d28619d75ca3" containerID="4d18221118aec7722936677c613feef493599337945a733fa02cf935feb62952" exitCode=0 Jan 28 17:42:50 crc kubenswrapper[5001]: I0128 17:42:50.650763 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds" event={"ID":"4bd938d1-8621-4b3b-acad-d28619d75ca3","Type":"ContainerDied","Data":"4d18221118aec7722936677c613feef493599337945a733fa02cf935feb62952"} Jan 28 17:42:51 crc kubenswrapper[5001]: I0128 17:42:51.969120 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds" Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.062288 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bd938d1-8621-4b3b-acad-d28619d75ca3-scripts\") pod \"4bd938d1-8621-4b3b-acad-d28619d75ca3\" (UID: \"4bd938d1-8621-4b3b-acad-d28619d75ca3\") " Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.062649 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtnw8\" (UniqueName: \"kubernetes.io/projected/4bd938d1-8621-4b3b-acad-d28619d75ca3-kube-api-access-qtnw8\") pod \"4bd938d1-8621-4b3b-acad-d28619d75ca3\" (UID: \"4bd938d1-8621-4b3b-acad-d28619d75ca3\") " Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.062759 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bd938d1-8621-4b3b-acad-d28619d75ca3-config-data\") pod \"4bd938d1-8621-4b3b-acad-d28619d75ca3\" (UID: \"4bd938d1-8621-4b3b-acad-d28619d75ca3\") " Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.067483 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bd938d1-8621-4b3b-acad-d28619d75ca3-scripts" (OuterVolumeSpecName: "scripts") pod "4bd938d1-8621-4b3b-acad-d28619d75ca3" (UID: "4bd938d1-8621-4b3b-acad-d28619d75ca3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.068870 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bd938d1-8621-4b3b-acad-d28619d75ca3-kube-api-access-qtnw8" (OuterVolumeSpecName: "kube-api-access-qtnw8") pod "4bd938d1-8621-4b3b-acad-d28619d75ca3" (UID: "4bd938d1-8621-4b3b-acad-d28619d75ca3"). InnerVolumeSpecName "kube-api-access-qtnw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.085372 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bd938d1-8621-4b3b-acad-d28619d75ca3-config-data" (OuterVolumeSpecName: "config-data") pod "4bd938d1-8621-4b3b-acad-d28619d75ca3" (UID: "4bd938d1-8621-4b3b-acad-d28619d75ca3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.165253 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bd938d1-8621-4b3b-acad-d28619d75ca3-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.165304 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bd938d1-8621-4b3b-acad-d28619d75ca3-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.165316 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtnw8\" (UniqueName: \"kubernetes.io/projected/4bd938d1-8621-4b3b-acad-d28619d75ca3-kube-api-access-qtnw8\") on node \"crc\" DevicePath \"\"" Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.670260 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds" event={"ID":"4bd938d1-8621-4b3b-acad-d28619d75ca3","Type":"ContainerDied","Data":"ab0ab7cf8875c12acd6521d73ea1ccdde7c9bd809c24c2f5c82cb21563a6359b"} Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.670578 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab0ab7cf8875c12acd6521d73ea1ccdde7c9bd809c24c2f5c82cb21563a6359b" Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.670686 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds" Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.749626 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:42:52 crc kubenswrapper[5001]: E0128 17:42:52.749961 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bd938d1-8621-4b3b-acad-d28619d75ca3" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.749995 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bd938d1-8621-4b3b-acad-d28619d75ca3" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.750147 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bd938d1-8621-4b3b-acad-d28619d75ca3" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.750660 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.753552 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-grkl6" Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.753598 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.764294 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.876728 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5crtg\" (UniqueName: \"kubernetes.io/projected/b231e9f8-fe36-43e3-978a-bf5d8059f9b6-kube-api-access-5crtg\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"b231e9f8-fe36-43e3-978a-bf5d8059f9b6\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.876933 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b231e9f8-fe36-43e3-978a-bf5d8059f9b6-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"b231e9f8-fe36-43e3-978a-bf5d8059f9b6\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.978799 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5crtg\" (UniqueName: \"kubernetes.io/projected/b231e9f8-fe36-43e3-978a-bf5d8059f9b6-kube-api-access-5crtg\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"b231e9f8-fe36-43e3-978a-bf5d8059f9b6\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.978863 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b231e9f8-fe36-43e3-978a-bf5d8059f9b6-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"b231e9f8-fe36-43e3-978a-bf5d8059f9b6\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:42:52 crc kubenswrapper[5001]: I0128 17:42:52.982312 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b231e9f8-fe36-43e3-978a-bf5d8059f9b6-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"b231e9f8-fe36-43e3-978a-bf5d8059f9b6\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:42:53 crc kubenswrapper[5001]: I0128 17:42:53.027730 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5crtg\" (UniqueName: \"kubernetes.io/projected/b231e9f8-fe36-43e3-978a-bf5d8059f9b6-kube-api-access-5crtg\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"b231e9f8-fe36-43e3-978a-bf5d8059f9b6\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:42:53 crc kubenswrapper[5001]: I0128 17:42:53.065773 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:42:53 crc kubenswrapper[5001]: I0128 17:42:53.519818 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:42:53 crc kubenswrapper[5001]: I0128 17:42:53.681756 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"b231e9f8-fe36-43e3-978a-bf5d8059f9b6","Type":"ContainerStarted","Data":"1801023c937f1689623c3612675eec87b1fc8570454273bc4073ffb9a117b08a"} Jan 28 17:42:53 crc kubenswrapper[5001]: I0128 17:42:53.682008 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:42:53 crc kubenswrapper[5001]: I0128 17:42:53.705447 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=1.7054243740000001 podStartE2EDuration="1.705424374s" podCreationTimestamp="2026-01-28 17:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:42:53.69798732 +0000 UTC m=+1619.865775560" watchObservedRunningTime="2026-01-28 17:42:53.705424374 +0000 UTC m=+1619.873212604" Jan 28 17:42:54 crc kubenswrapper[5001]: I0128 17:42:54.690854 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"b231e9f8-fe36-43e3-978a-bf5d8059f9b6","Type":"ContainerStarted","Data":"ffc227f3f6bd49fe3ac26f67027840cb1e71231938aa17b87dae9a60082f6299"} Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.119794 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.585101 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd"] Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.586048 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd" Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.588812 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.589495 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.604856 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd"] Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.698550 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366-config-data\") pod \"nova-kuttl-cell0-cell-mapping-mphcd\" (UID: \"23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd" Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.698678 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366-scripts\") pod \"nova-kuttl-cell0-cell-mapping-mphcd\" (UID: \"23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd" Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.699477 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zx26\" (UniqueName: \"kubernetes.io/projected/23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366-kube-api-access-8zx26\") pod \"nova-kuttl-cell0-cell-mapping-mphcd\" (UID: \"23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd" Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.800814 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366-config-data\") pod \"nova-kuttl-cell0-cell-mapping-mphcd\" (UID: \"23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd" Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.800882 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366-scripts\") pod \"nova-kuttl-cell0-cell-mapping-mphcd\" (UID: \"23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd" Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.800931 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zx26\" (UniqueName: \"kubernetes.io/projected/23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366-kube-api-access-8zx26\") pod \"nova-kuttl-cell0-cell-mapping-mphcd\" (UID: \"23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd" Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.807656 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366-scripts\") pod \"nova-kuttl-cell0-cell-mapping-mphcd\" (UID: \"23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd" Jan 28 17:42:58 crc 
kubenswrapper[5001]: I0128 17:42:58.807729 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366-config-data\") pod \"nova-kuttl-cell0-cell-mapping-mphcd\" (UID: \"23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd" Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.826755 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zx26\" (UniqueName: \"kubernetes.io/projected/23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366-kube-api-access-8zx26\") pod \"nova-kuttl-cell0-cell-mapping-mphcd\" (UID: \"23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd" Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.861500 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.864277 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.874812 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.893452 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.917658 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd" Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.951237 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.952793 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.955497 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 17:42:58 crc kubenswrapper[5001]: I0128 17:42:58.961039 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.008115 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.008951 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcw6b\" (UniqueName: \"kubernetes.io/projected/1b50955b-6736-4195-b5a7-b79ce334c2b6-kube-api-access-rcw6b\") pod \"nova-kuttl-api-0\" (UID: \"1b50955b-6736-4195-b5a7-b79ce334c2b6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.009027 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b50955b-6736-4195-b5a7-b79ce334c2b6-config-data\") pod \"nova-kuttl-api-0\" (UID: \"1b50955b-6736-4195-b5a7-b79ce334c2b6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.009061 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b50955b-6736-4195-b5a7-b79ce334c2b6-logs\") pod \"nova-kuttl-api-0\" (UID: \"1b50955b-6736-4195-b5a7-b79ce334c2b6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.009408 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.011327 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.039037 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.104856 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.106725 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.115182 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b50955b-6736-4195-b5a7-b79ce334c2b6-config-data\") pod \"nova-kuttl-api-0\" (UID: \"1b50955b-6736-4195-b5a7-b79ce334c2b6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.115249 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b50955b-6736-4195-b5a7-b79ce334c2b6-logs\") pod \"nova-kuttl-api-0\" (UID: \"1b50955b-6736-4195-b5a7-b79ce334c2b6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.115290 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0354446b-b372-4934-bf4e-43ecd798ca5c-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"0354446b-b372-4934-bf4e-43ecd798ca5c\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.115343 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms89j\" (UniqueName: \"kubernetes.io/projected/4ed8b2be-7627-4ec5-b650-67cb5e6ba670-kube-api-access-ms89j\") pod \"nova-kuttl-scheduler-0\" (UID: \"4ed8b2be-7627-4ec5-b650-67cb5e6ba670\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.115368 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8jrt\" (UniqueName: \"kubernetes.io/projected/0354446b-b372-4934-bf4e-43ecd798ca5c-kube-api-access-t8jrt\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"0354446b-b372-4934-bf4e-43ecd798ca5c\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.115397 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ed8b2be-7627-4ec5-b650-67cb5e6ba670-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"4ed8b2be-7627-4ec5-b650-67cb5e6ba670\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.115486 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcw6b\" (UniqueName: \"kubernetes.io/projected/1b50955b-6736-4195-b5a7-b79ce334c2b6-kube-api-access-rcw6b\") pod \"nova-kuttl-api-0\" (UID: \"1b50955b-6736-4195-b5a7-b79ce334c2b6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.116469 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b50955b-6736-4195-b5a7-b79ce334c2b6-logs\") pod \"nova-kuttl-api-0\" (UID: \"1b50955b-6736-4195-b5a7-b79ce334c2b6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.117666 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.123875 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1b50955b-6736-4195-b5a7-b79ce334c2b6-config-data\") pod \"nova-kuttl-api-0\" (UID: \"1b50955b-6736-4195-b5a7-b79ce334c2b6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.125179 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.150035 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcw6b\" (UniqueName: \"kubernetes.io/projected/1b50955b-6736-4195-b5a7-b79ce334c2b6-kube-api-access-rcw6b\") pod \"nova-kuttl-api-0\" (UID: \"1b50955b-6736-4195-b5a7-b79ce334c2b6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.206699 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.218112 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb19e3f7-f1ae-4c53-b89f-e14733cac87e-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"fb19e3f7-f1ae-4c53-b89f-e14733cac87e\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.218384 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0354446b-b372-4934-bf4e-43ecd798ca5c-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"0354446b-b372-4934-bf4e-43ecd798ca5c\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.218488 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb19e3f7-f1ae-4c53-b89f-e14733cac87e-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"fb19e3f7-f1ae-4c53-b89f-e14733cac87e\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.218550 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ncvb\" (UniqueName: \"kubernetes.io/projected/fb19e3f7-f1ae-4c53-b89f-e14733cac87e-kube-api-access-5ncvb\") pod \"nova-kuttl-metadata-0\" (UID: \"fb19e3f7-f1ae-4c53-b89f-e14733cac87e\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.218631 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms89j\" (UniqueName: \"kubernetes.io/projected/4ed8b2be-7627-4ec5-b650-67cb5e6ba670-kube-api-access-ms89j\") pod \"nova-kuttl-scheduler-0\" (UID: \"4ed8b2be-7627-4ec5-b650-67cb5e6ba670\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.218691 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8jrt\" (UniqueName: \"kubernetes.io/projected/0354446b-b372-4934-bf4e-43ecd798ca5c-kube-api-access-t8jrt\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"0354446b-b372-4934-bf4e-43ecd798ca5c\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.218725 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4ed8b2be-7627-4ec5-b650-67cb5e6ba670-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"4ed8b2be-7627-4ec5-b650-67cb5e6ba670\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.222152 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0354446b-b372-4934-bf4e-43ecd798ca5c-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"0354446b-b372-4934-bf4e-43ecd798ca5c\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.224109 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ed8b2be-7627-4ec5-b650-67cb5e6ba670-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"4ed8b2be-7627-4ec5-b650-67cb5e6ba670\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.239384 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms89j\" (UniqueName: \"kubernetes.io/projected/4ed8b2be-7627-4ec5-b650-67cb5e6ba670-kube-api-access-ms89j\") pod \"nova-kuttl-scheduler-0\" (UID: \"4ed8b2be-7627-4ec5-b650-67cb5e6ba670\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.239701 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8jrt\" (UniqueName: \"kubernetes.io/projected/0354446b-b372-4934-bf4e-43ecd798ca5c-kube-api-access-t8jrt\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"0354446b-b372-4934-bf4e-43ecd798ca5c\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.320547 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb19e3f7-f1ae-4c53-b89f-e14733cac87e-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"fb19e3f7-f1ae-4c53-b89f-e14733cac87e\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.320616 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ncvb\" (UniqueName: \"kubernetes.io/projected/fb19e3f7-f1ae-4c53-b89f-e14733cac87e-kube-api-access-5ncvb\") pod \"nova-kuttl-metadata-0\" (UID: \"fb19e3f7-f1ae-4c53-b89f-e14733cac87e\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.320727 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb19e3f7-f1ae-4c53-b89f-e14733cac87e-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"fb19e3f7-f1ae-4c53-b89f-e14733cac87e\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.321677 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb19e3f7-f1ae-4c53-b89f-e14733cac87e-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"fb19e3f7-f1ae-4c53-b89f-e14733cac87e\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.328684 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb19e3f7-f1ae-4c53-b89f-e14733cac87e-config-data\") pod \"nova-kuttl-metadata-0\" (UID: 
\"fb19e3f7-f1ae-4c53-b89f-e14733cac87e\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.339692 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ncvb\" (UniqueName: \"kubernetes.io/projected/fb19e3f7-f1ae-4c53-b89f-e14733cac87e-kube-api-access-5ncvb\") pod \"nova-kuttl-metadata-0\" (UID: \"fb19e3f7-f1ae-4c53-b89f-e14733cac87e\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.348567 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.362386 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.468366 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.486091 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd"] Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.622957 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc"] Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.624939 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.628655 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.628874 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.649256 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc"] Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.694321 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:42:59 crc kubenswrapper[5001]: W0128 17:42:59.696851 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b50955b_6736_4195_b5a7_b79ce334c2b6.slice/crio-e39a2aa48c124a853db0efeb239b0379af049733dba67b01468e89beed35f7fd WatchSource:0}: Error finding container e39a2aa48c124a853db0efeb239b0379af049733dba67b01468e89beed35f7fd: Status 404 returned error can't find the container with id e39a2aa48c124a853db0efeb239b0379af049733dba67b01468e89beed35f7fd Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.735857 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs58j\" (UniqueName: \"kubernetes.io/projected/bf809582-ac7b-428b-9d93-9724bc2edccf-kube-api-access-vs58j\") pod \"nova-kuttl-cell1-conductor-db-sync-6xjbc\" (UID: \"bf809582-ac7b-428b-9d93-9724bc2edccf\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.735992 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bf809582-ac7b-428b-9d93-9724bc2edccf-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-6xjbc\" (UID: \"bf809582-ac7b-428b-9d93-9724bc2edccf\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.736073 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf809582-ac7b-428b-9d93-9724bc2edccf-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-6xjbc\" (UID: \"bf809582-ac7b-428b-9d93-9724bc2edccf\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.738029 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"1b50955b-6736-4195-b5a7-b79ce334c2b6","Type":"ContainerStarted","Data":"e39a2aa48c124a853db0efeb239b0379af049733dba67b01468e89beed35f7fd"} Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.740904 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd" event={"ID":"23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366","Type":"ContainerStarted","Data":"01a2181525f63ee7a9078e0054de0a3ef283c385b29963ed9a7684f3263ea222"} Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.837539 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf809582-ac7b-428b-9d93-9724bc2edccf-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-6xjbc\" (UID: \"bf809582-ac7b-428b-9d93-9724bc2edccf\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.837631 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs58j\" (UniqueName: \"kubernetes.io/projected/bf809582-ac7b-428b-9d93-9724bc2edccf-kube-api-access-vs58j\") pod \"nova-kuttl-cell1-conductor-db-sync-6xjbc\" (UID: \"bf809582-ac7b-428b-9d93-9724bc2edccf\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.837697 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf809582-ac7b-428b-9d93-9724bc2edccf-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-6xjbc\" (UID: \"bf809582-ac7b-428b-9d93-9724bc2edccf\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.843315 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf809582-ac7b-428b-9d93-9724bc2edccf-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-6xjbc\" (UID: \"bf809582-ac7b-428b-9d93-9724bc2edccf\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.844145 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf809582-ac7b-428b-9d93-9724bc2edccf-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-6xjbc\" (UID: \"bf809582-ac7b-428b-9d93-9724bc2edccf\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.857491 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs58j\" (UniqueName: 
\"kubernetes.io/projected/bf809582-ac7b-428b-9d93-9724bc2edccf-kube-api-access-vs58j\") pod \"nova-kuttl-cell1-conductor-db-sync-6xjbc\" (UID: \"bf809582-ac7b-428b-9d93-9724bc2edccf\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.900410 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:42:59 crc kubenswrapper[5001]: W0128 17:42:59.907728 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ed8b2be_7627_4ec5_b650_67cb5e6ba670.slice/crio-cb776701aaac46f047574dd7f3b4998b088baf5f660da209fb6d60fd07b4211a WatchSource:0}: Error finding container cb776701aaac46f047574dd7f3b4998b088baf5f660da209fb6d60fd07b4211a: Status 404 returned error can't find the container with id cb776701aaac46f047574dd7f3b4998b088baf5f660da209fb6d60fd07b4211a Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.966065 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc" Jan 28 17:42:59 crc kubenswrapper[5001]: I0128 17:42:59.994502 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:43:00 crc kubenswrapper[5001]: W0128 17:43:00.010651 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0354446b_b372_4934_bf4e_43ecd798ca5c.slice/crio-f35c3cf58878e86cda9cbeb754ea5c4d48161e21c8d702a7091fa59c47c80cdd WatchSource:0}: Error finding container f35c3cf58878e86cda9cbeb754ea5c4d48161e21c8d702a7091fa59c47c80cdd: Status 404 returned error can't find the container with id f35c3cf58878e86cda9cbeb754ea5c4d48161e21c8d702a7091fa59c47c80cdd Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.098107 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:43:00 crc kubenswrapper[5001]: W0128 17:43:00.107007 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb19e3f7_f1ae_4c53_b89f_e14733cac87e.slice/crio-e0df1cf16769a7bb1a8ffbe4306208a1950376171ddce683d94fc881cda5a8db WatchSource:0}: Error finding container e0df1cf16769a7bb1a8ffbe4306208a1950376171ddce683d94fc881cda5a8db: Status 404 returned error can't find the container with id e0df1cf16769a7bb1a8ffbe4306208a1950376171ddce683d94fc881cda5a8db Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.428374 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc"] Jan 28 17:43:00 crc kubenswrapper[5001]: W0128 17:43:00.430361 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf809582_ac7b_428b_9d93_9724bc2edccf.slice/crio-cbefd297298a34a22a87768dca6c8c25ba1434535ce9c88d637bf29d61a054f8 WatchSource:0}: Error finding container cbefd297298a34a22a87768dca6c8c25ba1434535ce9c88d637bf29d61a054f8: Status 404 returned error can't find the container with id cbefd297298a34a22a87768dca6c8c25ba1434535ce9c88d637bf29d61a054f8 Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.757908 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd" 
event={"ID":"23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366","Type":"ContainerStarted","Data":"991da8b045406b2b7509928bd3ef2787781c60fc39634293a913840f32d8aba3"} Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.767938 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"0354446b-b372-4934-bf4e-43ecd798ca5c","Type":"ContainerStarted","Data":"589f8bf64649dfe7db837167d6f02399ce0f161dac43ba8d3b347d19c07d781f"} Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.768006 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"0354446b-b372-4934-bf4e-43ecd798ca5c","Type":"ContainerStarted","Data":"f35c3cf58878e86cda9cbeb754ea5c4d48161e21c8d702a7091fa59c47c80cdd"} Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.773950 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd" podStartSLOduration=2.773936138 podStartE2EDuration="2.773936138s" podCreationTimestamp="2026-01-28 17:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:43:00.772825396 +0000 UTC m=+1626.940613636" watchObservedRunningTime="2026-01-28 17:43:00.773936138 +0000 UTC m=+1626.941724368" Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.778885 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"4ed8b2be-7627-4ec5-b650-67cb5e6ba670","Type":"ContainerStarted","Data":"2dbf75a9bafdfd01ffa7ef032837d070e7c0ea02510da4fefc2b78891524d896"} Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.778935 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"4ed8b2be-7627-4ec5-b650-67cb5e6ba670","Type":"ContainerStarted","Data":"cb776701aaac46f047574dd7f3b4998b088baf5f660da209fb6d60fd07b4211a"} Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.788815 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc" event={"ID":"bf809582-ac7b-428b-9d93-9724bc2edccf","Type":"ContainerStarted","Data":"e7e53d0054f79d61859e34f737ea5129036a6aad0096643481ddfe00c79500e1"} Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.788867 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc" event={"ID":"bf809582-ac7b-428b-9d93-9724bc2edccf","Type":"ContainerStarted","Data":"cbefd297298a34a22a87768dca6c8c25ba1434535ce9c88d637bf29d61a054f8"} Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.789082 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=2.789066455 podStartE2EDuration="2.789066455s" podCreationTimestamp="2026-01-28 17:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:43:00.788496189 +0000 UTC m=+1626.956284419" watchObservedRunningTime="2026-01-28 17:43:00.789066455 +0000 UTC m=+1626.956854685" Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.797335 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" 
event={"ID":"fb19e3f7-f1ae-4c53-b89f-e14733cac87e","Type":"ContainerStarted","Data":"48a3f71c5525c48f377de8de153e6db27f0461adbeee648e16a8ba867efa9d6d"} Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.797373 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"fb19e3f7-f1ae-4c53-b89f-e14733cac87e","Type":"ContainerStarted","Data":"5e215a1f8cae928ceb27ccd0bfa695d54fa2dca66e530ea437772e0feb593e9d"} Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.797384 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"fb19e3f7-f1ae-4c53-b89f-e14733cac87e","Type":"ContainerStarted","Data":"e0df1cf16769a7bb1a8ffbe4306208a1950376171ddce683d94fc881cda5a8db"} Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.803022 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"1b50955b-6736-4195-b5a7-b79ce334c2b6","Type":"ContainerStarted","Data":"f9fd5f9bd445a6a338fda50850cd0c16a8b4bf67e387a3528dc91c4d0b2a583a"} Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.803055 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"1b50955b-6736-4195-b5a7-b79ce334c2b6","Type":"ContainerStarted","Data":"b4530903c1d444c865b528f1e9b9ee0f496724d4aa0b2541487ea727803fab6c"} Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.816812 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.816791436 podStartE2EDuration="2.816791436s" podCreationTimestamp="2026-01-28 17:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:43:00.808139426 +0000 UTC m=+1626.975927666" watchObservedRunningTime="2026-01-28 17:43:00.816791436 +0000 UTC m=+1626.984579686" Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.887391 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=1.8873743539999999 podStartE2EDuration="1.887374354s" podCreationTimestamp="2026-01-28 17:42:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:43:00.850465458 +0000 UTC m=+1627.018253688" watchObservedRunningTime="2026-01-28 17:43:00.887374354 +0000 UTC m=+1627.055162584" Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.892619 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.892606635 podStartE2EDuration="2.892606635s" podCreationTimestamp="2026-01-28 17:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:43:00.829684828 +0000 UTC m=+1626.997473068" watchObservedRunningTime="2026-01-28 17:43:00.892606635 +0000 UTC m=+1627.060394865" Jan 28 17:43:00 crc kubenswrapper[5001]: I0128 17:43:00.908910 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc" podStartSLOduration=1.908885995 podStartE2EDuration="1.908885995s" podCreationTimestamp="2026-01-28 17:42:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-28 17:43:00.876194951 +0000 UTC m=+1627.043983191" watchObservedRunningTime="2026-01-28 17:43:00.908885995 +0000 UTC m=+1627.076674235" Jan 28 17:43:03 crc kubenswrapper[5001]: I0128 17:43:03.828934 5001 generic.go:334] "Generic (PLEG): container finished" podID="bf809582-ac7b-428b-9d93-9724bc2edccf" containerID="e7e53d0054f79d61859e34f737ea5129036a6aad0096643481ddfe00c79500e1" exitCode=0 Jan 28 17:43:03 crc kubenswrapper[5001]: I0128 17:43:03.829028 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc" event={"ID":"bf809582-ac7b-428b-9d93-9724bc2edccf","Type":"ContainerDied","Data":"e7e53d0054f79d61859e34f737ea5129036a6aad0096643481ddfe00c79500e1"} Jan 28 17:43:04 crc kubenswrapper[5001]: I0128 17:43:04.349058 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:04 crc kubenswrapper[5001]: I0128 17:43:04.363430 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:43:04 crc kubenswrapper[5001]: I0128 17:43:04.469004 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:04 crc kubenswrapper[5001]: I0128 17:43:04.469098 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:04 crc kubenswrapper[5001]: I0128 17:43:04.835211 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:43:04 crc kubenswrapper[5001]: I0128 17:43:04.835839 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.163044 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc" Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.331686 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf809582-ac7b-428b-9d93-9724bc2edccf-config-data\") pod \"bf809582-ac7b-428b-9d93-9724bc2edccf\" (UID: \"bf809582-ac7b-428b-9d93-9724bc2edccf\") " Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.331761 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf809582-ac7b-428b-9d93-9724bc2edccf-scripts\") pod \"bf809582-ac7b-428b-9d93-9724bc2edccf\" (UID: \"bf809582-ac7b-428b-9d93-9724bc2edccf\") " Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.331888 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs58j\" (UniqueName: \"kubernetes.io/projected/bf809582-ac7b-428b-9d93-9724bc2edccf-kube-api-access-vs58j\") pod \"bf809582-ac7b-428b-9d93-9724bc2edccf\" (UID: \"bf809582-ac7b-428b-9d93-9724bc2edccf\") " Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.337670 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf809582-ac7b-428b-9d93-9724bc2edccf-scripts" (OuterVolumeSpecName: "scripts") pod "bf809582-ac7b-428b-9d93-9724bc2edccf" (UID: "bf809582-ac7b-428b-9d93-9724bc2edccf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.339203 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf809582-ac7b-428b-9d93-9724bc2edccf-kube-api-access-vs58j" (OuterVolumeSpecName: "kube-api-access-vs58j") pod "bf809582-ac7b-428b-9d93-9724bc2edccf" (UID: "bf809582-ac7b-428b-9d93-9724bc2edccf"). InnerVolumeSpecName "kube-api-access-vs58j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.356147 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf809582-ac7b-428b-9d93-9724bc2edccf-config-data" (OuterVolumeSpecName: "config-data") pod "bf809582-ac7b-428b-9d93-9724bc2edccf" (UID: "bf809582-ac7b-428b-9d93-9724bc2edccf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.433997 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf809582-ac7b-428b-9d93-9724bc2edccf-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.434037 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf809582-ac7b-428b-9d93-9724bc2edccf-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.434054 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs58j\" (UniqueName: \"kubernetes.io/projected/bf809582-ac7b-428b-9d93-9724bc2edccf-kube-api-access-vs58j\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.851687 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc" event={"ID":"bf809582-ac7b-428b-9d93-9724bc2edccf","Type":"ContainerDied","Data":"cbefd297298a34a22a87768dca6c8c25ba1434535ce9c88d637bf29d61a054f8"} Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.852044 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbefd297298a34a22a87768dca6c8c25ba1434535ce9c88d637bf29d61a054f8" Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.851741 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc" Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.854243 5001 generic.go:334] "Generic (PLEG): container finished" podID="23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366" containerID="991da8b045406b2b7509928bd3ef2787781c60fc39634293a913840f32d8aba3" exitCode=0 Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.854292 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd" event={"ID":"23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366","Type":"ContainerDied","Data":"991da8b045406b2b7509928bd3ef2787781c60fc39634293a913840f32d8aba3"} Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.926736 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:43:05 crc kubenswrapper[5001]: E0128 17:43:05.927061 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf809582-ac7b-428b-9d93-9724bc2edccf" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.927076 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf809582-ac7b-428b-9d93-9724bc2edccf" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.927232 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf809582-ac7b-428b-9d93-9724bc2edccf" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.927722 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.930309 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 28 17:43:05 crc kubenswrapper[5001]: I0128 17:43:05.936202 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:43:06 crc kubenswrapper[5001]: I0128 17:43:06.041243 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq9cp\" (UniqueName: \"kubernetes.io/projected/e2987317-e3b0-4dce-89ca-cab188e4098e-kube-api-access-gq9cp\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"e2987317-e3b0-4dce-89ca-cab188e4098e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:43:06 crc kubenswrapper[5001]: I0128 17:43:06.041320 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2987317-e3b0-4dce-89ca-cab188e4098e-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"e2987317-e3b0-4dce-89ca-cab188e4098e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:43:06 crc kubenswrapper[5001]: I0128 17:43:06.142717 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq9cp\" (UniqueName: \"kubernetes.io/projected/e2987317-e3b0-4dce-89ca-cab188e4098e-kube-api-access-gq9cp\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"e2987317-e3b0-4dce-89ca-cab188e4098e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:43:06 crc kubenswrapper[5001]: I0128 17:43:06.142813 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2987317-e3b0-4dce-89ca-cab188e4098e-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"e2987317-e3b0-4dce-89ca-cab188e4098e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:43:06 crc kubenswrapper[5001]: I0128 17:43:06.153231 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2987317-e3b0-4dce-89ca-cab188e4098e-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"e2987317-e3b0-4dce-89ca-cab188e4098e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:43:06 crc kubenswrapper[5001]: I0128 17:43:06.161301 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq9cp\" (UniqueName: \"kubernetes.io/projected/e2987317-e3b0-4dce-89ca-cab188e4098e-kube-api-access-gq9cp\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"e2987317-e3b0-4dce-89ca-cab188e4098e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:43:06 crc kubenswrapper[5001]: I0128 17:43:06.242907 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:43:06 crc kubenswrapper[5001]: I0128 17:43:06.694967 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:43:06 crc kubenswrapper[5001]: I0128 17:43:06.863853 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"e2987317-e3b0-4dce-89ca-cab188e4098e","Type":"ContainerStarted","Data":"09aa19b80e20e4aca0091a90db9c07946fd43cc07f26fdfab9a3b4aa87ca53ad"} Jan 28 17:43:07 crc kubenswrapper[5001]: I0128 17:43:07.132970 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd" Jan 28 17:43:07 crc kubenswrapper[5001]: I0128 17:43:07.264443 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366-config-data\") pod \"23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366\" (UID: \"23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366\") " Jan 28 17:43:07 crc kubenswrapper[5001]: I0128 17:43:07.264520 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366-scripts\") pod \"23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366\" (UID: \"23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366\") " Jan 28 17:43:07 crc kubenswrapper[5001]: I0128 17:43:07.264712 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zx26\" (UniqueName: \"kubernetes.io/projected/23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366-kube-api-access-8zx26\") pod \"23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366\" (UID: \"23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366\") " Jan 28 17:43:07 crc kubenswrapper[5001]: I0128 17:43:07.273360 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366-kube-api-access-8zx26" (OuterVolumeSpecName: "kube-api-access-8zx26") pod "23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366" (UID: "23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366"). InnerVolumeSpecName "kube-api-access-8zx26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:43:07 crc kubenswrapper[5001]: I0128 17:43:07.274075 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366-scripts" (OuterVolumeSpecName: "scripts") pod "23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366" (UID: "23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:43:07 crc kubenswrapper[5001]: I0128 17:43:07.330193 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366-config-data" (OuterVolumeSpecName: "config-data") pod "23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366" (UID: "23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:43:07 crc kubenswrapper[5001]: I0128 17:43:07.367060 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:07 crc kubenswrapper[5001]: I0128 17:43:07.367095 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zx26\" (UniqueName: \"kubernetes.io/projected/23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366-kube-api-access-8zx26\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:07 crc kubenswrapper[5001]: I0128 17:43:07.367107 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:07 crc kubenswrapper[5001]: I0128 17:43:07.874518 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"e2987317-e3b0-4dce-89ca-cab188e4098e","Type":"ContainerStarted","Data":"be5c882ff449b6ace9f727bacab6436af42218249118a7cd2f38355f1d49ebc3"} Jan 28 17:43:07 crc kubenswrapper[5001]: I0128 17:43:07.874889 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:43:07 crc kubenswrapper[5001]: I0128 17:43:07.877321 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd" event={"ID":"23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366","Type":"ContainerDied","Data":"01a2181525f63ee7a9078e0054de0a3ef283c385b29963ed9a7684f3263ea222"} Jan 28 17:43:07 crc kubenswrapper[5001]: I0128 17:43:07.877448 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01a2181525f63ee7a9078e0054de0a3ef283c385b29963ed9a7684f3263ea222" Jan 28 17:43:07 crc kubenswrapper[5001]: I0128 17:43:07.877366 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd" Jan 28 17:43:07 crc kubenswrapper[5001]: I0128 17:43:07.893958 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=2.893832276 podStartE2EDuration="2.893832276s" podCreationTimestamp="2026-01-28 17:43:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:43:07.887705309 +0000 UTC m=+1634.055493559" watchObservedRunningTime="2026-01-28 17:43:07.893832276 +0000 UTC m=+1634.061620506" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.122124 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.123121 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="1b50955b-6736-4195-b5a7-b79ce334c2b6" containerName="nova-kuttl-api-log" containerID="cri-o://b4530903c1d444c865b528f1e9b9ee0f496724d4aa0b2541487ea727803fab6c" gracePeriod=30 Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.123224 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="1b50955b-6736-4195-b5a7-b79ce334c2b6" containerName="nova-kuttl-api-api" containerID="cri-o://f9fd5f9bd445a6a338fda50850cd0c16a8b4bf67e387a3528dc91c4d0b2a583a" gracePeriod=30 Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.131585 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.131783 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="4ed8b2be-7627-4ec5-b650-67cb5e6ba670" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://2dbf75a9bafdfd01ffa7ef032837d070e7c0ea02510da4fefc2b78891524d896" gracePeriod=30 Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.215655 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.216049 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="fb19e3f7-f1ae-4c53-b89f-e14733cac87e" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://48a3f71c5525c48f377de8de153e6db27f0461adbeee648e16a8ba867efa9d6d" gracePeriod=30 Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.216284 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="fb19e3f7-f1ae-4c53-b89f-e14733cac87e" containerName="nova-kuttl-metadata-log" containerID="cri-o://5e215a1f8cae928ceb27ccd0bfa695d54fa2dca66e530ea437772e0feb593e9d" gracePeriod=30 Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.638374 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.685958 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b50955b-6736-4195-b5a7-b79ce334c2b6-logs\") pod \"1b50955b-6736-4195-b5a7-b79ce334c2b6\" (UID: \"1b50955b-6736-4195-b5a7-b79ce334c2b6\") " Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.686106 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcw6b\" (UniqueName: \"kubernetes.io/projected/1b50955b-6736-4195-b5a7-b79ce334c2b6-kube-api-access-rcw6b\") pod \"1b50955b-6736-4195-b5a7-b79ce334c2b6\" (UID: \"1b50955b-6736-4195-b5a7-b79ce334c2b6\") " Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.686196 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b50955b-6736-4195-b5a7-b79ce334c2b6-config-data\") pod \"1b50955b-6736-4195-b5a7-b79ce334c2b6\" (UID: \"1b50955b-6736-4195-b5a7-b79ce334c2b6\") " Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.688065 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b50955b-6736-4195-b5a7-b79ce334c2b6-logs" (OuterVolumeSpecName: "logs") pod "1b50955b-6736-4195-b5a7-b79ce334c2b6" (UID: "1b50955b-6736-4195-b5a7-b79ce334c2b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.696130 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b50955b-6736-4195-b5a7-b79ce334c2b6-kube-api-access-rcw6b" (OuterVolumeSpecName: "kube-api-access-rcw6b") pod "1b50955b-6736-4195-b5a7-b79ce334c2b6" (UID: "1b50955b-6736-4195-b5a7-b79ce334c2b6"). InnerVolumeSpecName "kube-api-access-rcw6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.714112 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b50955b-6736-4195-b5a7-b79ce334c2b6-config-data" (OuterVolumeSpecName: "config-data") pod "1b50955b-6736-4195-b5a7-b79ce334c2b6" (UID: "1b50955b-6736-4195-b5a7-b79ce334c2b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.735591 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.787859 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb19e3f7-f1ae-4c53-b89f-e14733cac87e-logs\") pod \"fb19e3f7-f1ae-4c53-b89f-e14733cac87e\" (UID: \"fb19e3f7-f1ae-4c53-b89f-e14733cac87e\") " Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.787933 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ncvb\" (UniqueName: \"kubernetes.io/projected/fb19e3f7-f1ae-4c53-b89f-e14733cac87e-kube-api-access-5ncvb\") pod \"fb19e3f7-f1ae-4c53-b89f-e14733cac87e\" (UID: \"fb19e3f7-f1ae-4c53-b89f-e14733cac87e\") " Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.788048 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb19e3f7-f1ae-4c53-b89f-e14733cac87e-config-data\") pod \"fb19e3f7-f1ae-4c53-b89f-e14733cac87e\" (UID: \"fb19e3f7-f1ae-4c53-b89f-e14733cac87e\") " Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.788281 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb19e3f7-f1ae-4c53-b89f-e14733cac87e-logs" (OuterVolumeSpecName: "logs") pod "fb19e3f7-f1ae-4c53-b89f-e14733cac87e" (UID: "fb19e3f7-f1ae-4c53-b89f-e14733cac87e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.788294 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcw6b\" (UniqueName: \"kubernetes.io/projected/1b50955b-6736-4195-b5a7-b79ce334c2b6-kube-api-access-rcw6b\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.788363 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b50955b-6736-4195-b5a7-b79ce334c2b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.788378 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b50955b-6736-4195-b5a7-b79ce334c2b6-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.791572 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb19e3f7-f1ae-4c53-b89f-e14733cac87e-kube-api-access-5ncvb" (OuterVolumeSpecName: "kube-api-access-5ncvb") pod "fb19e3f7-f1ae-4c53-b89f-e14733cac87e" (UID: "fb19e3f7-f1ae-4c53-b89f-e14733cac87e"). InnerVolumeSpecName "kube-api-access-5ncvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.807868 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb19e3f7-f1ae-4c53-b89f-e14733cac87e-config-data" (OuterVolumeSpecName: "config-data") pod "fb19e3f7-f1ae-4c53-b89f-e14733cac87e" (UID: "fb19e3f7-f1ae-4c53-b89f-e14733cac87e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.885578 5001 generic.go:334] "Generic (PLEG): container finished" podID="fb19e3f7-f1ae-4c53-b89f-e14733cac87e" containerID="48a3f71c5525c48f377de8de153e6db27f0461adbeee648e16a8ba867efa9d6d" exitCode=0 Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.885612 5001 generic.go:334] "Generic (PLEG): container finished" podID="fb19e3f7-f1ae-4c53-b89f-e14733cac87e" containerID="5e215a1f8cae928ceb27ccd0bfa695d54fa2dca66e530ea437772e0feb593e9d" exitCode=143 Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.885652 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"fb19e3f7-f1ae-4c53-b89f-e14733cac87e","Type":"ContainerDied","Data":"48a3f71c5525c48f377de8de153e6db27f0461adbeee648e16a8ba867efa9d6d"} Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.885679 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"fb19e3f7-f1ae-4c53-b89f-e14733cac87e","Type":"ContainerDied","Data":"5e215a1f8cae928ceb27ccd0bfa695d54fa2dca66e530ea437772e0feb593e9d"} Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.885689 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"fb19e3f7-f1ae-4c53-b89f-e14733cac87e","Type":"ContainerDied","Data":"e0df1cf16769a7bb1a8ffbe4306208a1950376171ddce683d94fc881cda5a8db"} Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.885708 5001 scope.go:117] "RemoveContainer" containerID="48a3f71c5525c48f377de8de153e6db27f0461adbeee648e16a8ba867efa9d6d" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.885827 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.888115 5001 generic.go:334] "Generic (PLEG): container finished" podID="1b50955b-6736-4195-b5a7-b79ce334c2b6" containerID="f9fd5f9bd445a6a338fda50850cd0c16a8b4bf67e387a3528dc91c4d0b2a583a" exitCode=0 Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.888142 5001 generic.go:334] "Generic (PLEG): container finished" podID="1b50955b-6736-4195-b5a7-b79ce334c2b6" containerID="b4530903c1d444c865b528f1e9b9ee0f496724d4aa0b2541487ea727803fab6c" exitCode=143 Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.888217 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.888231 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"1b50955b-6736-4195-b5a7-b79ce334c2b6","Type":"ContainerDied","Data":"f9fd5f9bd445a6a338fda50850cd0c16a8b4bf67e387a3528dc91c4d0b2a583a"} Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.888270 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"1b50955b-6736-4195-b5a7-b79ce334c2b6","Type":"ContainerDied","Data":"b4530903c1d444c865b528f1e9b9ee0f496724d4aa0b2541487ea727803fab6c"} Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.888285 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"1b50955b-6736-4195-b5a7-b79ce334c2b6","Type":"ContainerDied","Data":"e39a2aa48c124a853db0efeb239b0379af049733dba67b01468e89beed35f7fd"} Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.889167 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb19e3f7-f1ae-4c53-b89f-e14733cac87e-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.889199 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ncvb\" (UniqueName: \"kubernetes.io/projected/fb19e3f7-f1ae-4c53-b89f-e14733cac87e-kube-api-access-5ncvb\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.889241 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb19e3f7-f1ae-4c53-b89f-e14733cac87e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.906744 5001 scope.go:117] "RemoveContainer" containerID="5e215a1f8cae928ceb27ccd0bfa695d54fa2dca66e530ea437772e0feb593e9d" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.941180 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.942420 5001 scope.go:117] "RemoveContainer" containerID="48a3f71c5525c48f377de8de153e6db27f0461adbeee648e16a8ba867efa9d6d" Jan 28 17:43:08 crc kubenswrapper[5001]: E0128 17:43:08.944125 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48a3f71c5525c48f377de8de153e6db27f0461adbeee648e16a8ba867efa9d6d\": container with ID starting with 48a3f71c5525c48f377de8de153e6db27f0461adbeee648e16a8ba867efa9d6d not found: ID does not exist" containerID="48a3f71c5525c48f377de8de153e6db27f0461adbeee648e16a8ba867efa9d6d" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.944172 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48a3f71c5525c48f377de8de153e6db27f0461adbeee648e16a8ba867efa9d6d"} err="failed to get container status \"48a3f71c5525c48f377de8de153e6db27f0461adbeee648e16a8ba867efa9d6d\": rpc error: code = NotFound desc = could not find container \"48a3f71c5525c48f377de8de153e6db27f0461adbeee648e16a8ba867efa9d6d\": container with ID starting with 48a3f71c5525c48f377de8de153e6db27f0461adbeee648e16a8ba867efa9d6d not found: ID does not exist" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.944199 5001 scope.go:117] "RemoveContainer" containerID="5e215a1f8cae928ceb27ccd0bfa695d54fa2dca66e530ea437772e0feb593e9d" Jan 28 
17:43:08 crc kubenswrapper[5001]: E0128 17:43:08.946012 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e215a1f8cae928ceb27ccd0bfa695d54fa2dca66e530ea437772e0feb593e9d\": container with ID starting with 5e215a1f8cae928ceb27ccd0bfa695d54fa2dca66e530ea437772e0feb593e9d not found: ID does not exist" containerID="5e215a1f8cae928ceb27ccd0bfa695d54fa2dca66e530ea437772e0feb593e9d" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.946066 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e215a1f8cae928ceb27ccd0bfa695d54fa2dca66e530ea437772e0feb593e9d"} err="failed to get container status \"5e215a1f8cae928ceb27ccd0bfa695d54fa2dca66e530ea437772e0feb593e9d\": rpc error: code = NotFound desc = could not find container \"5e215a1f8cae928ceb27ccd0bfa695d54fa2dca66e530ea437772e0feb593e9d\": container with ID starting with 5e215a1f8cae928ceb27ccd0bfa695d54fa2dca66e530ea437772e0feb593e9d not found: ID does not exist" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.946096 5001 scope.go:117] "RemoveContainer" containerID="48a3f71c5525c48f377de8de153e6db27f0461adbeee648e16a8ba867efa9d6d" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.958621 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.959220 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48a3f71c5525c48f377de8de153e6db27f0461adbeee648e16a8ba867efa9d6d"} err="failed to get container status \"48a3f71c5525c48f377de8de153e6db27f0461adbeee648e16a8ba867efa9d6d\": rpc error: code = NotFound desc = could not find container \"48a3f71c5525c48f377de8de153e6db27f0461adbeee648e16a8ba867efa9d6d\": container with ID starting with 48a3f71c5525c48f377de8de153e6db27f0461adbeee648e16a8ba867efa9d6d not found: ID does not exist" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.959268 5001 scope.go:117] "RemoveContainer" containerID="5e215a1f8cae928ceb27ccd0bfa695d54fa2dca66e530ea437772e0feb593e9d" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.959805 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e215a1f8cae928ceb27ccd0bfa695d54fa2dca66e530ea437772e0feb593e9d"} err="failed to get container status \"5e215a1f8cae928ceb27ccd0bfa695d54fa2dca66e530ea437772e0feb593e9d\": rpc error: code = NotFound desc = could not find container \"5e215a1f8cae928ceb27ccd0bfa695d54fa2dca66e530ea437772e0feb593e9d\": container with ID starting with 5e215a1f8cae928ceb27ccd0bfa695d54fa2dca66e530ea437772e0feb593e9d not found: ID does not exist" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.959855 5001 scope.go:117] "RemoveContainer" containerID="f9fd5f9bd445a6a338fda50850cd0c16a8b4bf67e387a3528dc91c4d0b2a583a" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.988874 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:43:08 crc kubenswrapper[5001]: E0128 17:43:08.989350 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb19e3f7-f1ae-4c53-b89f-e14733cac87e" containerName="nova-kuttl-metadata-log" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.989378 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb19e3f7-f1ae-4c53-b89f-e14733cac87e" containerName="nova-kuttl-metadata-log" Jan 28 17:43:08 crc 
kubenswrapper[5001]: E0128 17:43:08.989397 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b50955b-6736-4195-b5a7-b79ce334c2b6" containerName="nova-kuttl-api-log" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.989407 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b50955b-6736-4195-b5a7-b79ce334c2b6" containerName="nova-kuttl-api-log" Jan 28 17:43:08 crc kubenswrapper[5001]: E0128 17:43:08.989419 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb19e3f7-f1ae-4c53-b89f-e14733cac87e" containerName="nova-kuttl-metadata-metadata" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.989427 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb19e3f7-f1ae-4c53-b89f-e14733cac87e" containerName="nova-kuttl-metadata-metadata" Jan 28 17:43:08 crc kubenswrapper[5001]: E0128 17:43:08.989443 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b50955b-6736-4195-b5a7-b79ce334c2b6" containerName="nova-kuttl-api-api" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.989451 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b50955b-6736-4195-b5a7-b79ce334c2b6" containerName="nova-kuttl-api-api" Jan 28 17:43:08 crc kubenswrapper[5001]: E0128 17:43:08.989464 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366" containerName="nova-manage" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.989472 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366" containerName="nova-manage" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.989673 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b50955b-6736-4195-b5a7-b79ce334c2b6" containerName="nova-kuttl-api-api" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.989692 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366" containerName="nova-manage" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.989705 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb19e3f7-f1ae-4c53-b89f-e14733cac87e" containerName="nova-kuttl-metadata-metadata" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.989715 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb19e3f7-f1ae-4c53-b89f-e14733cac87e" containerName="nova-kuttl-metadata-log" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.989728 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b50955b-6736-4195-b5a7-b79ce334c2b6" containerName="nova-kuttl-api-log" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.990855 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:08 crc kubenswrapper[5001]: I0128 17:43:08.996593 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.003823 5001 scope.go:117] "RemoveContainer" containerID="b4530903c1d444c865b528f1e9b9ee0f496724d4aa0b2541487ea727803fab6c" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.012741 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.026112 5001 scope.go:117] "RemoveContainer" containerID="f9fd5f9bd445a6a338fda50850cd0c16a8b4bf67e387a3528dc91c4d0b2a583a" Jan 28 17:43:09 crc kubenswrapper[5001]: E0128 17:43:09.026816 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9fd5f9bd445a6a338fda50850cd0c16a8b4bf67e387a3528dc91c4d0b2a583a\": container with ID starting with f9fd5f9bd445a6a338fda50850cd0c16a8b4bf67e387a3528dc91c4d0b2a583a not found: ID does not exist" containerID="f9fd5f9bd445a6a338fda50850cd0c16a8b4bf67e387a3528dc91c4d0b2a583a" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.026882 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9fd5f9bd445a6a338fda50850cd0c16a8b4bf67e387a3528dc91c4d0b2a583a"} err="failed to get container status \"f9fd5f9bd445a6a338fda50850cd0c16a8b4bf67e387a3528dc91c4d0b2a583a\": rpc error: code = NotFound desc = could not find container \"f9fd5f9bd445a6a338fda50850cd0c16a8b4bf67e387a3528dc91c4d0b2a583a\": container with ID starting with f9fd5f9bd445a6a338fda50850cd0c16a8b4bf67e387a3528dc91c4d0b2a583a not found: ID does not exist" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.026923 5001 scope.go:117] "RemoveContainer" containerID="b4530903c1d444c865b528f1e9b9ee0f496724d4aa0b2541487ea727803fab6c" Jan 28 17:43:09 crc kubenswrapper[5001]: E0128 17:43:09.027288 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4530903c1d444c865b528f1e9b9ee0f496724d4aa0b2541487ea727803fab6c\": container with ID starting with b4530903c1d444c865b528f1e9b9ee0f496724d4aa0b2541487ea727803fab6c not found: ID does not exist" containerID="b4530903c1d444c865b528f1e9b9ee0f496724d4aa0b2541487ea727803fab6c" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.027331 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4530903c1d444c865b528f1e9b9ee0f496724d4aa0b2541487ea727803fab6c"} err="failed to get container status \"b4530903c1d444c865b528f1e9b9ee0f496724d4aa0b2541487ea727803fab6c\": rpc error: code = NotFound desc = could not find container \"b4530903c1d444c865b528f1e9b9ee0f496724d4aa0b2541487ea727803fab6c\": container with ID starting with b4530903c1d444c865b528f1e9b9ee0f496724d4aa0b2541487ea727803fab6c not found: ID does not exist" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.027366 5001 scope.go:117] "RemoveContainer" containerID="f9fd5f9bd445a6a338fda50850cd0c16a8b4bf67e387a3528dc91c4d0b2a583a" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.027597 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9fd5f9bd445a6a338fda50850cd0c16a8b4bf67e387a3528dc91c4d0b2a583a"} err="failed to get container status 
\"f9fd5f9bd445a6a338fda50850cd0c16a8b4bf67e387a3528dc91c4d0b2a583a\": rpc error: code = NotFound desc = could not find container \"f9fd5f9bd445a6a338fda50850cd0c16a8b4bf67e387a3528dc91c4d0b2a583a\": container with ID starting with f9fd5f9bd445a6a338fda50850cd0c16a8b4bf67e387a3528dc91c4d0b2a583a not found: ID does not exist" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.027623 5001 scope.go:117] "RemoveContainer" containerID="b4530903c1d444c865b528f1e9b9ee0f496724d4aa0b2541487ea727803fab6c" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.027808 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4530903c1d444c865b528f1e9b9ee0f496724d4aa0b2541487ea727803fab6c"} err="failed to get container status \"b4530903c1d444c865b528f1e9b9ee0f496724d4aa0b2541487ea727803fab6c\": rpc error: code = NotFound desc = could not find container \"b4530903c1d444c865b528f1e9b9ee0f496724d4aa0b2541487ea727803fab6c\": container with ID starting with b4530903c1d444c865b528f1e9b9ee0f496724d4aa0b2541487ea727803fab6c not found: ID does not exist" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.036379 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.052368 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.062303 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.064512 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.066178 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.071668 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.094519 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48zrd\" (UniqueName: \"kubernetes.io/projected/75b81d82-4454-40ec-a48c-6e068ff5f850-kube-api-access-48zrd\") pod \"nova-kuttl-api-0\" (UID: \"75b81d82-4454-40ec-a48c-6e068ff5f850\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.094670 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b81d82-4454-40ec-a48c-6e068ff5f850-config-data\") pod \"nova-kuttl-api-0\" (UID: \"75b81d82-4454-40ec-a48c-6e068ff5f850\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.094701 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wrps\" (UniqueName: \"kubernetes.io/projected/6b7ce3bb-7d93-468f-9e75-d2371d8f6709-kube-api-access-7wrps\") pod \"nova-kuttl-metadata-0\" (UID: \"6b7ce3bb-7d93-468f-9e75-d2371d8f6709\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.094733 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/75b81d82-4454-40ec-a48c-6e068ff5f850-logs\") pod \"nova-kuttl-api-0\" (UID: \"75b81d82-4454-40ec-a48c-6e068ff5f850\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.094765 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b7ce3bb-7d93-468f-9e75-d2371d8f6709-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"6b7ce3bb-7d93-468f-9e75-d2371d8f6709\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.094789 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b7ce3bb-7d93-468f-9e75-d2371d8f6709-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"6b7ce3bb-7d93-468f-9e75-d2371d8f6709\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.196170 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b81d82-4454-40ec-a48c-6e068ff5f850-logs\") pod \"nova-kuttl-api-0\" (UID: \"75b81d82-4454-40ec-a48c-6e068ff5f850\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.196222 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b7ce3bb-7d93-468f-9e75-d2371d8f6709-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"6b7ce3bb-7d93-468f-9e75-d2371d8f6709\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.196245 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b7ce3bb-7d93-468f-9e75-d2371d8f6709-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"6b7ce3bb-7d93-468f-9e75-d2371d8f6709\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.196286 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48zrd\" (UniqueName: \"kubernetes.io/projected/75b81d82-4454-40ec-a48c-6e068ff5f850-kube-api-access-48zrd\") pod \"nova-kuttl-api-0\" (UID: \"75b81d82-4454-40ec-a48c-6e068ff5f850\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.196348 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b81d82-4454-40ec-a48c-6e068ff5f850-config-data\") pod \"nova-kuttl-api-0\" (UID: \"75b81d82-4454-40ec-a48c-6e068ff5f850\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.196382 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wrps\" (UniqueName: \"kubernetes.io/projected/6b7ce3bb-7d93-468f-9e75-d2371d8f6709-kube-api-access-7wrps\") pod \"nova-kuttl-metadata-0\" (UID: \"6b7ce3bb-7d93-468f-9e75-d2371d8f6709\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.197287 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b81d82-4454-40ec-a48c-6e068ff5f850-logs\") pod \"nova-kuttl-api-0\" (UID: \"75b81d82-4454-40ec-a48c-6e068ff5f850\") " pod="nova-kuttl-default/nova-kuttl-api-0" 
Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.197437 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b7ce3bb-7d93-468f-9e75-d2371d8f6709-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"6b7ce3bb-7d93-468f-9e75-d2371d8f6709\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.200410 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b81d82-4454-40ec-a48c-6e068ff5f850-config-data\") pod \"nova-kuttl-api-0\" (UID: \"75b81d82-4454-40ec-a48c-6e068ff5f850\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.201118 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b7ce3bb-7d93-468f-9e75-d2371d8f6709-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"6b7ce3bb-7d93-468f-9e75-d2371d8f6709\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.213865 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48zrd\" (UniqueName: \"kubernetes.io/projected/75b81d82-4454-40ec-a48c-6e068ff5f850-kube-api-access-48zrd\") pod \"nova-kuttl-api-0\" (UID: \"75b81d82-4454-40ec-a48c-6e068ff5f850\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.218050 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wrps\" (UniqueName: \"kubernetes.io/projected/6b7ce3bb-7d93-468f-9e75-d2371d8f6709-kube-api-access-7wrps\") pod \"nova-kuttl-metadata-0\" (UID: \"6b7ce3bb-7d93-468f-9e75-d2371d8f6709\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.314281 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.363600 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.376589 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.378906 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.837145 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:43:09 crc kubenswrapper[5001]: W0128 17:43:09.837844 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b7ce3bb_7d93_468f_9e75_d2371d8f6709.slice/crio-72b2def6ae9b3171c3c16ed2a6f0843bb5f5c80304a0fb0743d8435b99513276 WatchSource:0}: Error finding container 72b2def6ae9b3171c3c16ed2a6f0843bb5f5c80304a0fb0743d8435b99513276: Status 404 returned error can't find the container with id 72b2def6ae9b3171c3c16ed2a6f0843bb5f5c80304a0fb0743d8435b99513276 Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.905706 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6b7ce3bb-7d93-468f-9e75-d2371d8f6709","Type":"ContainerStarted","Data":"72b2def6ae9b3171c3c16ed2a6f0843bb5f5c80304a0fb0743d8435b99513276"} Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.920189 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:43:09 crc kubenswrapper[5001]: I0128 17:43:09.920427 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:43:10 crc kubenswrapper[5001]: I0128 17:43:10.602777 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b50955b-6736-4195-b5a7-b79ce334c2b6" path="/var/lib/kubelet/pods/1b50955b-6736-4195-b5a7-b79ce334c2b6/volumes" Jan 28 17:43:10 crc kubenswrapper[5001]: I0128 17:43:10.603695 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb19e3f7-f1ae-4c53-b89f-e14733cac87e" path="/var/lib/kubelet/pods/fb19e3f7-f1ae-4c53-b89f-e14733cac87e/volumes" Jan 28 17:43:10 crc kubenswrapper[5001]: I0128 17:43:10.932283 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6b7ce3bb-7d93-468f-9e75-d2371d8f6709","Type":"ContainerStarted","Data":"eb9aaf420b9abe30cc9f96ef9d5e554c1cb4285ab6565c00493f657d4cf7c73e"} Jan 28 17:43:10 crc kubenswrapper[5001]: I0128 17:43:10.932590 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6b7ce3bb-7d93-468f-9e75-d2371d8f6709","Type":"ContainerStarted","Data":"26fdd1c51e742ec5a717473db2a5e2614db355269dec46739d8b1279a549dc6a"} Jan 28 17:43:10 crc kubenswrapper[5001]: I0128 17:43:10.935086 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"75b81d82-4454-40ec-a48c-6e068ff5f850","Type":"ContainerStarted","Data":"42568c46af2097a3732b1ecd20b5392fdff269f4405f489f6af96cc6f2b549dd"} Jan 28 17:43:10 crc kubenswrapper[5001]: I0128 17:43:10.935122 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"75b81d82-4454-40ec-a48c-6e068ff5f850","Type":"ContainerStarted","Data":"d52aff9fa201080af05ef884935c52a36e51dfd2858486dd6183d7ff0292f1a2"} Jan 28 17:43:10 crc kubenswrapper[5001]: I0128 17:43:10.935135 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"75b81d82-4454-40ec-a48c-6e068ff5f850","Type":"ContainerStarted","Data":"bf910820e64d6b33abe57a7d1bfa2c7e5903a6a661d765d89268ae4ddebd0fbc"} Jan 28 17:43:10 crc kubenswrapper[5001]: 
I0128 17:43:10.952324 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.9523034790000002 podStartE2EDuration="2.952303479s" podCreationTimestamp="2026-01-28 17:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:43:10.948620343 +0000 UTC m=+1637.116408573" watchObservedRunningTime="2026-01-28 17:43:10.952303479 +0000 UTC m=+1637.120091719" Jan 28 17:43:10 crc kubenswrapper[5001]: I0128 17:43:10.984266 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.9842493709999998 podStartE2EDuration="2.984249371s" podCreationTimestamp="2026-01-28 17:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:43:10.97763651 +0000 UTC m=+1637.145424760" watchObservedRunningTime="2026-01-28 17:43:10.984249371 +0000 UTC m=+1637.152037601" Jan 28 17:43:11 crc kubenswrapper[5001]: I0128 17:43:11.268114 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:43:11 crc kubenswrapper[5001]: I0128 17:43:11.848996 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw"] Jan 28 17:43:11 crc kubenswrapper[5001]: I0128 17:43:11.850610 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw" Jan 28 17:43:11 crc kubenswrapper[5001]: I0128 17:43:11.854822 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts" Jan 28 17:43:11 crc kubenswrapper[5001]: I0128 17:43:11.863199 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw"] Jan 28 17:43:11 crc kubenswrapper[5001]: I0128 17:43:11.870730 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data" Jan 28 17:43:11 crc kubenswrapper[5001]: I0128 17:43:11.944909 5001 generic.go:334] "Generic (PLEG): container finished" podID="4ed8b2be-7627-4ec5-b650-67cb5e6ba670" containerID="2dbf75a9bafdfd01ffa7ef032837d070e7c0ea02510da4fefc2b78891524d896" exitCode=0 Jan 28 17:43:11 crc kubenswrapper[5001]: I0128 17:43:11.945062 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"4ed8b2be-7627-4ec5-b650-67cb5e6ba670","Type":"ContainerDied","Data":"2dbf75a9bafdfd01ffa7ef032837d070e7c0ea02510da4fefc2b78891524d896"} Jan 28 17:43:11 crc kubenswrapper[5001]: I0128 17:43:11.998127 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpfgb\" (UniqueName: \"kubernetes.io/projected/1d31e229-52c5-4f77-a446-4d54bc3a75af-kube-api-access-wpfgb\") pod \"nova-kuttl-cell1-cell-mapping-827fw\" (UID: \"1d31e229-52c5-4f77-a446-4d54bc3a75af\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw" Jan 28 17:43:11 crc kubenswrapper[5001]: I0128 17:43:11.998187 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d31e229-52c5-4f77-a446-4d54bc3a75af-config-data\") pod 
\"nova-kuttl-cell1-cell-mapping-827fw\" (UID: \"1d31e229-52c5-4f77-a446-4d54bc3a75af\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw" Jan 28 17:43:11 crc kubenswrapper[5001]: I0128 17:43:11.998514 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d31e229-52c5-4f77-a446-4d54bc3a75af-scripts\") pod \"nova-kuttl-cell1-cell-mapping-827fw\" (UID: \"1d31e229-52c5-4f77-a446-4d54bc3a75af\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw" Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.100585 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d31e229-52c5-4f77-a446-4d54bc3a75af-scripts\") pod \"nova-kuttl-cell1-cell-mapping-827fw\" (UID: \"1d31e229-52c5-4f77-a446-4d54bc3a75af\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw" Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.100699 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpfgb\" (UniqueName: \"kubernetes.io/projected/1d31e229-52c5-4f77-a446-4d54bc3a75af-kube-api-access-wpfgb\") pod \"nova-kuttl-cell1-cell-mapping-827fw\" (UID: \"1d31e229-52c5-4f77-a446-4d54bc3a75af\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw" Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.100744 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d31e229-52c5-4f77-a446-4d54bc3a75af-config-data\") pod \"nova-kuttl-cell1-cell-mapping-827fw\" (UID: \"1d31e229-52c5-4f77-a446-4d54bc3a75af\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw" Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.105848 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d31e229-52c5-4f77-a446-4d54bc3a75af-scripts\") pod \"nova-kuttl-cell1-cell-mapping-827fw\" (UID: \"1d31e229-52c5-4f77-a446-4d54bc3a75af\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw" Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.105888 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d31e229-52c5-4f77-a446-4d54bc3a75af-config-data\") pod \"nova-kuttl-cell1-cell-mapping-827fw\" (UID: \"1d31e229-52c5-4f77-a446-4d54bc3a75af\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw" Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.116421 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpfgb\" (UniqueName: \"kubernetes.io/projected/1d31e229-52c5-4f77-a446-4d54bc3a75af-kube-api-access-wpfgb\") pod \"nova-kuttl-cell1-cell-mapping-827fw\" (UID: \"1d31e229-52c5-4f77-a446-4d54bc3a75af\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw" Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.175244 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw" Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.273285 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.406184 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms89j\" (UniqueName: \"kubernetes.io/projected/4ed8b2be-7627-4ec5-b650-67cb5e6ba670-kube-api-access-ms89j\") pod \"4ed8b2be-7627-4ec5-b650-67cb5e6ba670\" (UID: \"4ed8b2be-7627-4ec5-b650-67cb5e6ba670\") " Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.406324 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ed8b2be-7627-4ec5-b650-67cb5e6ba670-config-data\") pod \"4ed8b2be-7627-4ec5-b650-67cb5e6ba670\" (UID: \"4ed8b2be-7627-4ec5-b650-67cb5e6ba670\") " Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.410820 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ed8b2be-7627-4ec5-b650-67cb5e6ba670-kube-api-access-ms89j" (OuterVolumeSpecName: "kube-api-access-ms89j") pod "4ed8b2be-7627-4ec5-b650-67cb5e6ba670" (UID: "4ed8b2be-7627-4ec5-b650-67cb5e6ba670"). InnerVolumeSpecName "kube-api-access-ms89j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.426929 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ed8b2be-7627-4ec5-b650-67cb5e6ba670-config-data" (OuterVolumeSpecName: "config-data") pod "4ed8b2be-7627-4ec5-b650-67cb5e6ba670" (UID: "4ed8b2be-7627-4ec5-b650-67cb5e6ba670"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.508494 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms89j\" (UniqueName: \"kubernetes.io/projected/4ed8b2be-7627-4ec5-b650-67cb5e6ba670-kube-api-access-ms89j\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.508538 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ed8b2be-7627-4ec5-b650-67cb5e6ba670-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.645465 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw"] Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.956521 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"4ed8b2be-7627-4ec5-b650-67cb5e6ba670","Type":"ContainerDied","Data":"cb776701aaac46f047574dd7f3b4998b088baf5f660da209fb6d60fd07b4211a"} Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.956934 5001 scope.go:117] "RemoveContainer" containerID="2dbf75a9bafdfd01ffa7ef032837d070e7c0ea02510da4fefc2b78891524d896" Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.956542 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.961901 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw" event={"ID":"1d31e229-52c5-4f77-a446-4d54bc3a75af","Type":"ContainerStarted","Data":"6e3b8d29fe13432be73e148d9b8fe27603f143e4dd486be7e5d7fe5a3baa25c6"} Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.961961 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw" event={"ID":"1d31e229-52c5-4f77-a446-4d54bc3a75af","Type":"ContainerStarted","Data":"0f0aa011c4d86af412393e17c293f418feeb2e2e5a95a0801d45963b93900fcc"} Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.979312 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw" podStartSLOduration=1.979294329 podStartE2EDuration="1.979294329s" podCreationTimestamp="2026-01-28 17:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:43:12.975695125 +0000 UTC m=+1639.143483355" watchObservedRunningTime="2026-01-28 17:43:12.979294329 +0000 UTC m=+1639.147082559" Jan 28 17:43:12 crc kubenswrapper[5001]: I0128 17:43:12.996366 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:43:13 crc kubenswrapper[5001]: I0128 17:43:13.003647 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:43:13 crc kubenswrapper[5001]: I0128 17:43:13.014783 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:43:13 crc kubenswrapper[5001]: E0128 17:43:13.015202 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ed8b2be-7627-4ec5-b650-67cb5e6ba670" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:43:13 crc kubenswrapper[5001]: I0128 17:43:13.015225 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ed8b2be-7627-4ec5-b650-67cb5e6ba670" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:43:13 crc kubenswrapper[5001]: I0128 17:43:13.015422 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ed8b2be-7627-4ec5-b650-67cb5e6ba670" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:43:13 crc kubenswrapper[5001]: I0128 17:43:13.016069 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:13 crc kubenswrapper[5001]: I0128 17:43:13.022806 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 17:43:13 crc kubenswrapper[5001]: I0128 17:43:13.024569 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:43:13 crc kubenswrapper[5001]: I0128 17:43:13.116809 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27hxb\" (UniqueName: \"kubernetes.io/projected/a1f9da2d-cc59-46f6-b708-194074720df7-kube-api-access-27hxb\") pod \"nova-kuttl-scheduler-0\" (UID: \"a1f9da2d-cc59-46f6-b708-194074720df7\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:13 crc kubenswrapper[5001]: I0128 17:43:13.117026 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f9da2d-cc59-46f6-b708-194074720df7-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"a1f9da2d-cc59-46f6-b708-194074720df7\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:13 crc kubenswrapper[5001]: I0128 17:43:13.219559 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27hxb\" (UniqueName: \"kubernetes.io/projected/a1f9da2d-cc59-46f6-b708-194074720df7-kube-api-access-27hxb\") pod \"nova-kuttl-scheduler-0\" (UID: \"a1f9da2d-cc59-46f6-b708-194074720df7\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:13 crc kubenswrapper[5001]: I0128 17:43:13.219733 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f9da2d-cc59-46f6-b708-194074720df7-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"a1f9da2d-cc59-46f6-b708-194074720df7\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:13 crc kubenswrapper[5001]: I0128 17:43:13.224197 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f9da2d-cc59-46f6-b708-194074720df7-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"a1f9da2d-cc59-46f6-b708-194074720df7\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:13 crc kubenswrapper[5001]: I0128 17:43:13.240011 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27hxb\" (UniqueName: \"kubernetes.io/projected/a1f9da2d-cc59-46f6-b708-194074720df7-kube-api-access-27hxb\") pod \"nova-kuttl-scheduler-0\" (UID: \"a1f9da2d-cc59-46f6-b708-194074720df7\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:13 crc kubenswrapper[5001]: I0128 17:43:13.364422 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:13 crc kubenswrapper[5001]: I0128 17:43:13.805904 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:43:13 crc kubenswrapper[5001]: I0128 17:43:13.970102 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"a1f9da2d-cc59-46f6-b708-194074720df7","Type":"ContainerStarted","Data":"b84c5a4bd173fb02eb182b913b2bc43600ecf333045997ee9b552b21abac4500"} Jan 28 17:43:14 crc kubenswrapper[5001]: I0128 17:43:14.314906 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:14 crc kubenswrapper[5001]: I0128 17:43:14.315182 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:14 crc kubenswrapper[5001]: I0128 17:43:14.606089 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ed8b2be-7627-4ec5-b650-67cb5e6ba670" path="/var/lib/kubelet/pods/4ed8b2be-7627-4ec5-b650-67cb5e6ba670/volumes" Jan 28 17:43:14 crc kubenswrapper[5001]: I0128 17:43:14.978918 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"a1f9da2d-cc59-46f6-b708-194074720df7","Type":"ContainerStarted","Data":"541a8f0c8eb2ef0ca3a6eb083ef72333b145f8b707cd4f3aaddd5d9188b35b3f"} Jan 28 17:43:14 crc kubenswrapper[5001]: I0128 17:43:14.997367 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.99734962 podStartE2EDuration="2.99734962s" podCreationTimestamp="2026-01-28 17:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:43:14.994844377 +0000 UTC m=+1641.162632617" watchObservedRunningTime="2026-01-28 17:43:14.99734962 +0000 UTC m=+1641.165137850" Jan 28 17:43:18 crc kubenswrapper[5001]: I0128 17:43:18.009303 5001 generic.go:334] "Generic (PLEG): container finished" podID="1d31e229-52c5-4f77-a446-4d54bc3a75af" containerID="6e3b8d29fe13432be73e148d9b8fe27603f143e4dd486be7e5d7fe5a3baa25c6" exitCode=0 Jan 28 17:43:18 crc kubenswrapper[5001]: I0128 17:43:18.009414 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw" event={"ID":"1d31e229-52c5-4f77-a446-4d54bc3a75af","Type":"ContainerDied","Data":"6e3b8d29fe13432be73e148d9b8fe27603f143e4dd486be7e5d7fe5a3baa25c6"} Jan 28 17:43:18 crc kubenswrapper[5001]: I0128 17:43:18.364876 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:19 crc kubenswrapper[5001]: I0128 17:43:19.315832 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:19 crc kubenswrapper[5001]: I0128 17:43:19.316405 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:19 crc kubenswrapper[5001]: I0128 17:43:19.380071 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:19 crc kubenswrapper[5001]: I0128 17:43:19.380383 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" 
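The "Observed pod startup duration" entry for nova-kuttl-scheduler-0 above reports podStartE2EDuration="2.99734962s" alongside podCreationTimestamp="2026-01-28 17:43:12 +0000 UTC" and watchObservedRunningTime="2026-01-28 17:43:14.99734962 +0000 UTC"; the duration is just the difference between those two wall-clock stamps (the trailing m=+1641... values are Go monotonic-clock offsets and take no part in the arithmetic). The minimal Python sketch below re-derives the figure from the printed timestamps only; it is illustrative, talks to no Kubernetes API, and truncates to microsecond resolution because datetime cannot carry the full nanosecond value.

from datetime import datetime, timezone

# Values copied from the log entry above (nanoseconds truncated to microseconds).
pod_creation   = datetime(2026, 1, 28, 17, 43, 12, 0,      tzinfo=timezone.utc)  # podCreationTimestamp
watch_observed = datetime(2026, 1, 28, 17, 43, 14, 997349, tzinfo=timezone.utc)  # watchObservedRunningTime

e2e = (watch_observed - pod_creation).total_seconds()
print(f"podStartE2EDuration ~= {e2e:.6f}s")  # ~2.997349s, matching the reported 2.99734962s

# firstStartedPulling/lastFinishedPulling are the zero time in this entry (no image pull),
# which is why podStartSLOduration and the end-to-end duration are identical here.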
Jan 28 17:43:19 crc kubenswrapper[5001]: I0128 17:43:19.412425 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw" Jan 28 17:43:19 crc kubenswrapper[5001]: I0128 17:43:19.528022 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpfgb\" (UniqueName: \"kubernetes.io/projected/1d31e229-52c5-4f77-a446-4d54bc3a75af-kube-api-access-wpfgb\") pod \"1d31e229-52c5-4f77-a446-4d54bc3a75af\" (UID: \"1d31e229-52c5-4f77-a446-4d54bc3a75af\") " Jan 28 17:43:19 crc kubenswrapper[5001]: I0128 17:43:19.528175 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d31e229-52c5-4f77-a446-4d54bc3a75af-scripts\") pod \"1d31e229-52c5-4f77-a446-4d54bc3a75af\" (UID: \"1d31e229-52c5-4f77-a446-4d54bc3a75af\") " Jan 28 17:43:19 crc kubenswrapper[5001]: I0128 17:43:19.528302 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d31e229-52c5-4f77-a446-4d54bc3a75af-config-data\") pod \"1d31e229-52c5-4f77-a446-4d54bc3a75af\" (UID: \"1d31e229-52c5-4f77-a446-4d54bc3a75af\") " Jan 28 17:43:19 crc kubenswrapper[5001]: I0128 17:43:19.533907 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d31e229-52c5-4f77-a446-4d54bc3a75af-scripts" (OuterVolumeSpecName: "scripts") pod "1d31e229-52c5-4f77-a446-4d54bc3a75af" (UID: "1d31e229-52c5-4f77-a446-4d54bc3a75af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:43:19 crc kubenswrapper[5001]: I0128 17:43:19.534243 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d31e229-52c5-4f77-a446-4d54bc3a75af-kube-api-access-wpfgb" (OuterVolumeSpecName: "kube-api-access-wpfgb") pod "1d31e229-52c5-4f77-a446-4d54bc3a75af" (UID: "1d31e229-52c5-4f77-a446-4d54bc3a75af"). InnerVolumeSpecName "kube-api-access-wpfgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:43:19 crc kubenswrapper[5001]: I0128 17:43:19.555326 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d31e229-52c5-4f77-a446-4d54bc3a75af-config-data" (OuterVolumeSpecName: "config-data") pod "1d31e229-52c5-4f77-a446-4d54bc3a75af" (UID: "1d31e229-52c5-4f77-a446-4d54bc3a75af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:43:19 crc kubenswrapper[5001]: I0128 17:43:19.630539 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d31e229-52c5-4f77-a446-4d54bc3a75af-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:19 crc kubenswrapper[5001]: I0128 17:43:19.630722 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpfgb\" (UniqueName: \"kubernetes.io/projected/1d31e229-52c5-4f77-a446-4d54bc3a75af-kube-api-access-wpfgb\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:19 crc kubenswrapper[5001]: I0128 17:43:19.630739 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d31e229-52c5-4f77-a446-4d54bc3a75af-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:20 crc kubenswrapper[5001]: I0128 17:43:20.037873 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw" Jan 28 17:43:20 crc kubenswrapper[5001]: I0128 17:43:20.037970 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw" event={"ID":"1d31e229-52c5-4f77-a446-4d54bc3a75af","Type":"ContainerDied","Data":"0f0aa011c4d86af412393e17c293f418feeb2e2e5a95a0801d45963b93900fcc"} Jan 28 17:43:20 crc kubenswrapper[5001]: I0128 17:43:20.038019 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f0aa011c4d86af412393e17c293f418feeb2e2e5a95a0801d45963b93900fcc" Jan 28 17:43:20 crc kubenswrapper[5001]: I0128 17:43:20.214777 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:43:20 crc kubenswrapper[5001]: I0128 17:43:20.215174 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="75b81d82-4454-40ec-a48c-6e068ff5f850" containerName="nova-kuttl-api-log" containerID="cri-o://d52aff9fa201080af05ef884935c52a36e51dfd2858486dd6183d7ff0292f1a2" gracePeriod=30 Jan 28 17:43:20 crc kubenswrapper[5001]: I0128 17:43:20.215232 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="75b81d82-4454-40ec-a48c-6e068ff5f850" containerName="nova-kuttl-api-api" containerID="cri-o://42568c46af2097a3732b1ecd20b5392fdff269f4405f489f6af96cc6f2b549dd" gracePeriod=30 Jan 28 17:43:20 crc kubenswrapper[5001]: I0128 17:43:20.220334 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="75b81d82-4454-40ec-a48c-6e068ff5f850" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.161:8774/\": EOF" Jan 28 17:43:20 crc kubenswrapper[5001]: I0128 17:43:20.220395 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="75b81d82-4454-40ec-a48c-6e068ff5f850" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.161:8774/\": EOF" Jan 28 17:43:20 crc kubenswrapper[5001]: I0128 17:43:20.240617 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:43:20 crc kubenswrapper[5001]: I0128 17:43:20.241079 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="a1f9da2d-cc59-46f6-b708-194074720df7" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://541a8f0c8eb2ef0ca3a6eb083ef72333b145f8b707cd4f3aaddd5d9188b35b3f" gracePeriod=30 Jan 28 17:43:20 crc kubenswrapper[5001]: I0128 17:43:20.315616 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:43:20 crc kubenswrapper[5001]: I0128 17:43:20.356232 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6b7ce3bb-7d93-468f-9e75-d2371d8f6709" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.160:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:43:20 crc kubenswrapper[5001]: I0128 17:43:20.397184 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6b7ce3bb-7d93-468f-9e75-d2371d8f6709" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get 
\"http://10.217.0.160:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:43:21 crc kubenswrapper[5001]: I0128 17:43:21.046160 5001 generic.go:334] "Generic (PLEG): container finished" podID="75b81d82-4454-40ec-a48c-6e068ff5f850" containerID="d52aff9fa201080af05ef884935c52a36e51dfd2858486dd6183d7ff0292f1a2" exitCode=143 Jan 28 17:43:21 crc kubenswrapper[5001]: I0128 17:43:21.046246 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"75b81d82-4454-40ec-a48c-6e068ff5f850","Type":"ContainerDied","Data":"d52aff9fa201080af05ef884935c52a36e51dfd2858486dd6183d7ff0292f1a2"} Jan 28 17:43:21 crc kubenswrapper[5001]: I0128 17:43:21.046417 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6b7ce3bb-7d93-468f-9e75-d2371d8f6709" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://eb9aaf420b9abe30cc9f96ef9d5e554c1cb4285ab6565c00493f657d4cf7c73e" gracePeriod=30 Jan 28 17:43:21 crc kubenswrapper[5001]: I0128 17:43:21.046523 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6b7ce3bb-7d93-468f-9e75-d2371d8f6709" containerName="nova-kuttl-metadata-log" containerID="cri-o://26fdd1c51e742ec5a717473db2a5e2614db355269dec46739d8b1279a549dc6a" gracePeriod=30 Jan 28 17:43:21 crc kubenswrapper[5001]: I0128 17:43:21.493338 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:21 crc kubenswrapper[5001]: I0128 17:43:21.661139 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f9da2d-cc59-46f6-b708-194074720df7-config-data\") pod \"a1f9da2d-cc59-46f6-b708-194074720df7\" (UID: \"a1f9da2d-cc59-46f6-b708-194074720df7\") " Jan 28 17:43:21 crc kubenswrapper[5001]: I0128 17:43:21.661358 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27hxb\" (UniqueName: \"kubernetes.io/projected/a1f9da2d-cc59-46f6-b708-194074720df7-kube-api-access-27hxb\") pod \"a1f9da2d-cc59-46f6-b708-194074720df7\" (UID: \"a1f9da2d-cc59-46f6-b708-194074720df7\") " Jan 28 17:43:21 crc kubenswrapper[5001]: I0128 17:43:21.670337 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1f9da2d-cc59-46f6-b708-194074720df7-kube-api-access-27hxb" (OuterVolumeSpecName: "kube-api-access-27hxb") pod "a1f9da2d-cc59-46f6-b708-194074720df7" (UID: "a1f9da2d-cc59-46f6-b708-194074720df7"). InnerVolumeSpecName "kube-api-access-27hxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:43:21 crc kubenswrapper[5001]: I0128 17:43:21.685625 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1f9da2d-cc59-46f6-b708-194074720df7-config-data" (OuterVolumeSpecName: "config-data") pod "a1f9da2d-cc59-46f6-b708-194074720df7" (UID: "a1f9da2d-cc59-46f6-b708-194074720df7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:43:21 crc kubenswrapper[5001]: I0128 17:43:21.763481 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27hxb\" (UniqueName: \"kubernetes.io/projected/a1f9da2d-cc59-46f6-b708-194074720df7-kube-api-access-27hxb\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:21 crc kubenswrapper[5001]: I0128 17:43:21.763513 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f9da2d-cc59-46f6-b708-194074720df7-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.061202 5001 generic.go:334] "Generic (PLEG): container finished" podID="a1f9da2d-cc59-46f6-b708-194074720df7" containerID="541a8f0c8eb2ef0ca3a6eb083ef72333b145f8b707cd4f3aaddd5d9188b35b3f" exitCode=0 Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.061247 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"a1f9da2d-cc59-46f6-b708-194074720df7","Type":"ContainerDied","Data":"541a8f0c8eb2ef0ca3a6eb083ef72333b145f8b707cd4f3aaddd5d9188b35b3f"} Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.061329 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.061366 5001 scope.go:117] "RemoveContainer" containerID="541a8f0c8eb2ef0ca3a6eb083ef72333b145f8b707cd4f3aaddd5d9188b35b3f" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.061350 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"a1f9da2d-cc59-46f6-b708-194074720df7","Type":"ContainerDied","Data":"b84c5a4bd173fb02eb182b913b2bc43600ecf333045997ee9b552b21abac4500"} Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.063768 5001 generic.go:334] "Generic (PLEG): container finished" podID="6b7ce3bb-7d93-468f-9e75-d2371d8f6709" containerID="26fdd1c51e742ec5a717473db2a5e2614db355269dec46739d8b1279a549dc6a" exitCode=143 Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.063799 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6b7ce3bb-7d93-468f-9e75-d2371d8f6709","Type":"ContainerDied","Data":"26fdd1c51e742ec5a717473db2a5e2614db355269dec46739d8b1279a549dc6a"} Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.087112 5001 scope.go:117] "RemoveContainer" containerID="541a8f0c8eb2ef0ca3a6eb083ef72333b145f8b707cd4f3aaddd5d9188b35b3f" Jan 28 17:43:22 crc kubenswrapper[5001]: E0128 17:43:22.087740 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"541a8f0c8eb2ef0ca3a6eb083ef72333b145f8b707cd4f3aaddd5d9188b35b3f\": container with ID starting with 541a8f0c8eb2ef0ca3a6eb083ef72333b145f8b707cd4f3aaddd5d9188b35b3f not found: ID does not exist" containerID="541a8f0c8eb2ef0ca3a6eb083ef72333b145f8b707cd4f3aaddd5d9188b35b3f" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.087823 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"541a8f0c8eb2ef0ca3a6eb083ef72333b145f8b707cd4f3aaddd5d9188b35b3f"} err="failed to get container status \"541a8f0c8eb2ef0ca3a6eb083ef72333b145f8b707cd4f3aaddd5d9188b35b3f\": rpc error: code = NotFound desc = could not find container \"541a8f0c8eb2ef0ca3a6eb083ef72333b145f8b707cd4f3aaddd5d9188b35b3f\": container with ID 
starting with 541a8f0c8eb2ef0ca3a6eb083ef72333b145f8b707cd4f3aaddd5d9188b35b3f not found: ID does not exist" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.105343 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.118826 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.128592 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:43:22 crc kubenswrapper[5001]: E0128 17:43:22.137573 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d31e229-52c5-4f77-a446-4d54bc3a75af" containerName="nova-manage" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.137627 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d31e229-52c5-4f77-a446-4d54bc3a75af" containerName="nova-manage" Jan 28 17:43:22 crc kubenswrapper[5001]: E0128 17:43:22.137673 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1f9da2d-cc59-46f6-b708-194074720df7" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.137681 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1f9da2d-cc59-46f6-b708-194074720df7" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.138124 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1f9da2d-cc59-46f6-b708-194074720df7" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.138152 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d31e229-52c5-4f77-a446-4d54bc3a75af" containerName="nova-manage" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.138958 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.142637 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.169870 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.170579 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwpb6\" (UniqueName: \"kubernetes.io/projected/faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d-kube-api-access-qwpb6\") pod \"nova-kuttl-scheduler-0\" (UID: \"faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.170749 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.272180 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwpb6\" (UniqueName: \"kubernetes.io/projected/faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d-kube-api-access-qwpb6\") pod \"nova-kuttl-scheduler-0\" (UID: \"faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.272238 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.276873 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.291060 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwpb6\" (UniqueName: \"kubernetes.io/projected/faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d-kube-api-access-qwpb6\") pod \"nova-kuttl-scheduler-0\" (UID: \"faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.499470 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.608082 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1f9da2d-cc59-46f6-b708-194074720df7" path="/var/lib/kubelet/pods/a1f9da2d-cc59-46f6-b708-194074720df7/volumes" Jan 28 17:43:22 crc kubenswrapper[5001]: I0128 17:43:22.939886 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:43:23 crc kubenswrapper[5001]: I0128 17:43:23.074116 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d","Type":"ContainerStarted","Data":"7178cfb413c46dfabea2690958d37e1e8c3e4af7b88652b6849b986da321ffa2"} Jan 28 17:43:24 crc kubenswrapper[5001]: I0128 17:43:24.086136 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d","Type":"ContainerStarted","Data":"0021995f81adc902e6f4ba9fbceb53d6941ec352410d2406a29af3672d88db14"} Jan 28 17:43:24 crc kubenswrapper[5001]: I0128 17:43:24.104764 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.104741668 podStartE2EDuration="2.104741668s" podCreationTimestamp="2026-01-28 17:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:43:24.103175803 +0000 UTC m=+1650.270964023" watchObservedRunningTime="2026-01-28 17:43:24.104741668 +0000 UTC m=+1650.272529918" Jan 28 17:43:25 crc kubenswrapper[5001]: I0128 17:43:25.819184 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:25 crc kubenswrapper[5001]: I0128 17:43:25.837811 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b7ce3bb-7d93-468f-9e75-d2371d8f6709-config-data\") pod \"6b7ce3bb-7d93-468f-9e75-d2371d8f6709\" (UID: \"6b7ce3bb-7d93-468f-9e75-d2371d8f6709\") " Jan 28 17:43:25 crc kubenswrapper[5001]: I0128 17:43:25.837899 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wrps\" (UniqueName: \"kubernetes.io/projected/6b7ce3bb-7d93-468f-9e75-d2371d8f6709-kube-api-access-7wrps\") pod \"6b7ce3bb-7d93-468f-9e75-d2371d8f6709\" (UID: \"6b7ce3bb-7d93-468f-9e75-d2371d8f6709\") " Jan 28 17:43:25 crc kubenswrapper[5001]: I0128 17:43:25.838012 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b7ce3bb-7d93-468f-9e75-d2371d8f6709-logs\") pod \"6b7ce3bb-7d93-468f-9e75-d2371d8f6709\" (UID: \"6b7ce3bb-7d93-468f-9e75-d2371d8f6709\") " Jan 28 17:43:25 crc kubenswrapper[5001]: I0128 17:43:25.839796 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b7ce3bb-7d93-468f-9e75-d2371d8f6709-logs" (OuterVolumeSpecName: "logs") pod "6b7ce3bb-7d93-468f-9e75-d2371d8f6709" (UID: "6b7ce3bb-7d93-468f-9e75-d2371d8f6709"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:43:25 crc kubenswrapper[5001]: I0128 17:43:25.857665 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b7ce3bb-7d93-468f-9e75-d2371d8f6709-kube-api-access-7wrps" (OuterVolumeSpecName: "kube-api-access-7wrps") pod "6b7ce3bb-7d93-468f-9e75-d2371d8f6709" (UID: "6b7ce3bb-7d93-468f-9e75-d2371d8f6709"). InnerVolumeSpecName "kube-api-access-7wrps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:43:25 crc kubenswrapper[5001]: I0128 17:43:25.865474 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b7ce3bb-7d93-468f-9e75-d2371d8f6709-config-data" (OuterVolumeSpecName: "config-data") pod "6b7ce3bb-7d93-468f-9e75-d2371d8f6709" (UID: "6b7ce3bb-7d93-468f-9e75-d2371d8f6709"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:43:25 crc kubenswrapper[5001]: I0128 17:43:25.939818 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b7ce3bb-7d93-468f-9e75-d2371d8f6709-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:25 crc kubenswrapper[5001]: I0128 17:43:25.939852 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wrps\" (UniqueName: \"kubernetes.io/projected/6b7ce3bb-7d93-468f-9e75-d2371d8f6709-kube-api-access-7wrps\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:25 crc kubenswrapper[5001]: I0128 17:43:25.939862 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b7ce3bb-7d93-468f-9e75-d2371d8f6709-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.106395 5001 generic.go:334] "Generic (PLEG): container finished" podID="75b81d82-4454-40ec-a48c-6e068ff5f850" containerID="42568c46af2097a3732b1ecd20b5392fdff269f4405f489f6af96cc6f2b549dd" exitCode=0 Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.106477 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"75b81d82-4454-40ec-a48c-6e068ff5f850","Type":"ContainerDied","Data":"42568c46af2097a3732b1ecd20b5392fdff269f4405f489f6af96cc6f2b549dd"} Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.108479 5001 generic.go:334] "Generic (PLEG): container finished" podID="6b7ce3bb-7d93-468f-9e75-d2371d8f6709" containerID="eb9aaf420b9abe30cc9f96ef9d5e554c1cb4285ab6565c00493f657d4cf7c73e" exitCode=0 Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.108526 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6b7ce3bb-7d93-468f-9e75-d2371d8f6709","Type":"ContainerDied","Data":"eb9aaf420b9abe30cc9f96ef9d5e554c1cb4285ab6565c00493f657d4cf7c73e"} Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.108554 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6b7ce3bb-7d93-468f-9e75-d2371d8f6709","Type":"ContainerDied","Data":"72b2def6ae9b3171c3c16ed2a6f0843bb5f5c80304a0fb0743d8435b99513276"} Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.108570 5001 scope.go:117] "RemoveContainer" containerID="eb9aaf420b9abe30cc9f96ef9d5e554c1cb4285ab6565c00493f657d4cf7c73e" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.108530 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.127678 5001 scope.go:117] "RemoveContainer" containerID="26fdd1c51e742ec5a717473db2a5e2614db355269dec46739d8b1279a549dc6a" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.141822 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.149200 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.161190 5001 scope.go:117] "RemoveContainer" containerID="eb9aaf420b9abe30cc9f96ef9d5e554c1cb4285ab6565c00493f657d4cf7c73e" Jan 28 17:43:26 crc kubenswrapper[5001]: E0128 17:43:26.161587 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb9aaf420b9abe30cc9f96ef9d5e554c1cb4285ab6565c00493f657d4cf7c73e\": container with ID starting with eb9aaf420b9abe30cc9f96ef9d5e554c1cb4285ab6565c00493f657d4cf7c73e not found: ID does not exist" containerID="eb9aaf420b9abe30cc9f96ef9d5e554c1cb4285ab6565c00493f657d4cf7c73e" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.161623 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb9aaf420b9abe30cc9f96ef9d5e554c1cb4285ab6565c00493f657d4cf7c73e"} err="failed to get container status \"eb9aaf420b9abe30cc9f96ef9d5e554c1cb4285ab6565c00493f657d4cf7c73e\": rpc error: code = NotFound desc = could not find container \"eb9aaf420b9abe30cc9f96ef9d5e554c1cb4285ab6565c00493f657d4cf7c73e\": container with ID starting with eb9aaf420b9abe30cc9f96ef9d5e554c1cb4285ab6565c00493f657d4cf7c73e not found: ID does not exist" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.161646 5001 scope.go:117] "RemoveContainer" containerID="26fdd1c51e742ec5a717473db2a5e2614db355269dec46739d8b1279a549dc6a" Jan 28 17:43:26 crc kubenswrapper[5001]: E0128 17:43:26.161869 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26fdd1c51e742ec5a717473db2a5e2614db355269dec46739d8b1279a549dc6a\": container with ID starting with 26fdd1c51e742ec5a717473db2a5e2614db355269dec46739d8b1279a549dc6a not found: ID does not exist" containerID="26fdd1c51e742ec5a717473db2a5e2614db355269dec46739d8b1279a549dc6a" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.161908 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26fdd1c51e742ec5a717473db2a5e2614db355269dec46739d8b1279a549dc6a"} err="failed to get container status \"26fdd1c51e742ec5a717473db2a5e2614db355269dec46739d8b1279a549dc6a\": rpc error: code = NotFound desc = could not find container \"26fdd1c51e742ec5a717473db2a5e2614db355269dec46739d8b1279a549dc6a\": container with ID starting with 26fdd1c51e742ec5a717473db2a5e2614db355269dec46739d8b1279a549dc6a not found: ID does not exist" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.164223 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:43:26 crc kubenswrapper[5001]: E0128 17:43:26.164623 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b7ce3bb-7d93-468f-9e75-d2371d8f6709" containerName="nova-kuttl-metadata-log" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.164647 5001 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6b7ce3bb-7d93-468f-9e75-d2371d8f6709" containerName="nova-kuttl-metadata-log" Jan 28 17:43:26 crc kubenswrapper[5001]: E0128 17:43:26.164665 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b7ce3bb-7d93-468f-9e75-d2371d8f6709" containerName="nova-kuttl-metadata-metadata" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.164674 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b7ce3bb-7d93-468f-9e75-d2371d8f6709" containerName="nova-kuttl-metadata-metadata" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.164862 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b7ce3bb-7d93-468f-9e75-d2371d8f6709" containerName="nova-kuttl-metadata-log" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.164894 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b7ce3bb-7d93-468f-9e75-d2371d8f6709" containerName="nova-kuttl-metadata-metadata" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.165859 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.170559 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.199069 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.244660 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b238e8f-2244-480f-86f8-a5262f531e04-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"4b238e8f-2244-480f-86f8-a5262f531e04\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.244718 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b238e8f-2244-480f-86f8-a5262f531e04-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"4b238e8f-2244-480f-86f8-a5262f531e04\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.244875 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmdwq\" (UniqueName: \"kubernetes.io/projected/4b238e8f-2244-480f-86f8-a5262f531e04-kube-api-access-bmdwq\") pod \"nova-kuttl-metadata-0\" (UID: \"4b238e8f-2244-480f-86f8-a5262f531e04\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.346251 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmdwq\" (UniqueName: \"kubernetes.io/projected/4b238e8f-2244-480f-86f8-a5262f531e04-kube-api-access-bmdwq\") pod \"nova-kuttl-metadata-0\" (UID: \"4b238e8f-2244-480f-86f8-a5262f531e04\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.346431 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b238e8f-2244-480f-86f8-a5262f531e04-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"4b238e8f-2244-480f-86f8-a5262f531e04\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.347502 5001 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b238e8f-2244-480f-86f8-a5262f531e04-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"4b238e8f-2244-480f-86f8-a5262f531e04\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.348068 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b238e8f-2244-480f-86f8-a5262f531e04-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"4b238e8f-2244-480f-86f8-a5262f531e04\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.351322 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b238e8f-2244-480f-86f8-a5262f531e04-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"4b238e8f-2244-480f-86f8-a5262f531e04\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.384333 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmdwq\" (UniqueName: \"kubernetes.io/projected/4b238e8f-2244-480f-86f8-a5262f531e04-kube-api-access-bmdwq\") pod \"nova-kuttl-metadata-0\" (UID: \"4b238e8f-2244-480f-86f8-a5262f531e04\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.460861 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.498807 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.550886 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b81d82-4454-40ec-a48c-6e068ff5f850-logs\") pod \"75b81d82-4454-40ec-a48c-6e068ff5f850\" (UID: \"75b81d82-4454-40ec-a48c-6e068ff5f850\") " Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.550963 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48zrd\" (UniqueName: \"kubernetes.io/projected/75b81d82-4454-40ec-a48c-6e068ff5f850-kube-api-access-48zrd\") pod \"75b81d82-4454-40ec-a48c-6e068ff5f850\" (UID: \"75b81d82-4454-40ec-a48c-6e068ff5f850\") " Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.551190 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b81d82-4454-40ec-a48c-6e068ff5f850-config-data\") pod \"75b81d82-4454-40ec-a48c-6e068ff5f850\" (UID: \"75b81d82-4454-40ec-a48c-6e068ff5f850\") " Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.551653 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75b81d82-4454-40ec-a48c-6e068ff5f850-logs" (OuterVolumeSpecName: "logs") pod "75b81d82-4454-40ec-a48c-6e068ff5f850" (UID: "75b81d82-4454-40ec-a48c-6e068ff5f850"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.551784 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b81d82-4454-40ec-a48c-6e068ff5f850-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.555257 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b81d82-4454-40ec-a48c-6e068ff5f850-kube-api-access-48zrd" (OuterVolumeSpecName: "kube-api-access-48zrd") pod "75b81d82-4454-40ec-a48c-6e068ff5f850" (UID: "75b81d82-4454-40ec-a48c-6e068ff5f850"). InnerVolumeSpecName "kube-api-access-48zrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.572826 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b81d82-4454-40ec-a48c-6e068ff5f850-config-data" (OuterVolumeSpecName: "config-data") pod "75b81d82-4454-40ec-a48c-6e068ff5f850" (UID: "75b81d82-4454-40ec-a48c-6e068ff5f850"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.615361 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b7ce3bb-7d93-468f-9e75-d2371d8f6709" path="/var/lib/kubelet/pods/6b7ce3bb-7d93-468f-9e75-d2371d8f6709/volumes" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.653786 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48zrd\" (UniqueName: \"kubernetes.io/projected/75b81d82-4454-40ec-a48c-6e068ff5f850-kube-api-access-48zrd\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.653816 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b81d82-4454-40ec-a48c-6e068ff5f850-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:43:26 crc kubenswrapper[5001]: I0128 17:43:26.939248 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:43:26 crc kubenswrapper[5001]: W0128 17:43:26.946701 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b238e8f_2244_480f_86f8_a5262f531e04.slice/crio-99ba66614dda6812cc447c1572f7cbbd62b6d0f4c67cfe01ff7994486e911be2 WatchSource:0}: Error finding container 99ba66614dda6812cc447c1572f7cbbd62b6d0f4c67cfe01ff7994486e911be2: Status 404 returned error can't find the container with id 99ba66614dda6812cc447c1572f7cbbd62b6d0f4c67cfe01ff7994486e911be2 Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.120612 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4b238e8f-2244-480f-86f8-a5262f531e04","Type":"ContainerStarted","Data":"fcdf213028b075a040197616c4fd2ac1fdbd46b77879177e0fadcc8401a0b963"} Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.120687 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4b238e8f-2244-480f-86f8-a5262f531e04","Type":"ContainerStarted","Data":"99ba66614dda6812cc447c1572f7cbbd62b6d0f4c67cfe01ff7994486e911be2"} Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.124442 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" 
event={"ID":"75b81d82-4454-40ec-a48c-6e068ff5f850","Type":"ContainerDied","Data":"bf910820e64d6b33abe57a7d1bfa2c7e5903a6a661d765d89268ae4ddebd0fbc"} Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.124533 5001 scope.go:117] "RemoveContainer" containerID="42568c46af2097a3732b1ecd20b5392fdff269f4405f489f6af96cc6f2b549dd" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.124701 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.150228 5001 scope.go:117] "RemoveContainer" containerID="d52aff9fa201080af05ef884935c52a36e51dfd2858486dd6183d7ff0292f1a2" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.151148 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.162272 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.182711 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:43:27 crc kubenswrapper[5001]: E0128 17:43:27.183440 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b81d82-4454-40ec-a48c-6e068ff5f850" containerName="nova-kuttl-api-log" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.183485 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b81d82-4454-40ec-a48c-6e068ff5f850" containerName="nova-kuttl-api-log" Jan 28 17:43:27 crc kubenswrapper[5001]: E0128 17:43:27.183501 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b81d82-4454-40ec-a48c-6e068ff5f850" containerName="nova-kuttl-api-api" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.183508 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b81d82-4454-40ec-a48c-6e068ff5f850" containerName="nova-kuttl-api-api" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.183783 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b81d82-4454-40ec-a48c-6e068ff5f850" containerName="nova-kuttl-api-api" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.183809 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b81d82-4454-40ec-a48c-6e068ff5f850" containerName="nova-kuttl-api-log" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.184731 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.187174 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.191169 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.270741 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63b0d119-bac1-436b-8cf2-bfa3a9a4bf40-config-data\") pod \"nova-kuttl-api-0\" (UID: \"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.270857 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbm4c\" (UniqueName: \"kubernetes.io/projected/63b0d119-bac1-436b-8cf2-bfa3a9a4bf40-kube-api-access-rbm4c\") pod \"nova-kuttl-api-0\" (UID: \"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.270900 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63b0d119-bac1-436b-8cf2-bfa3a9a4bf40-logs\") pod \"nova-kuttl-api-0\" (UID: \"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.373344 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbm4c\" (UniqueName: \"kubernetes.io/projected/63b0d119-bac1-436b-8cf2-bfa3a9a4bf40-kube-api-access-rbm4c\") pod \"nova-kuttl-api-0\" (UID: \"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.373690 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63b0d119-bac1-436b-8cf2-bfa3a9a4bf40-logs\") pod \"nova-kuttl-api-0\" (UID: \"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.373770 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63b0d119-bac1-436b-8cf2-bfa3a9a4bf40-config-data\") pod \"nova-kuttl-api-0\" (UID: \"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.377929 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63b0d119-bac1-436b-8cf2-bfa3a9a4bf40-logs\") pod \"nova-kuttl-api-0\" (UID: \"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.378510 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63b0d119-bac1-436b-8cf2-bfa3a9a4bf40-config-data\") pod \"nova-kuttl-api-0\" (UID: \"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.405900 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbm4c\" 
(UniqueName: \"kubernetes.io/projected/63b0d119-bac1-436b-8cf2-bfa3a9a4bf40-kube-api-access-rbm4c\") pod \"nova-kuttl-api-0\" (UID: \"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.499699 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.513677 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:27 crc kubenswrapper[5001]: I0128 17:43:27.948148 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:43:28 crc kubenswrapper[5001]: I0128 17:43:28.141040 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4b238e8f-2244-480f-86f8-a5262f531e04","Type":"ContainerStarted","Data":"95d19e47bbb890f6e7896469d047302751df30722ff1fbed6fba0ed8087b284f"} Jan 28 17:43:28 crc kubenswrapper[5001]: I0128 17:43:28.144603 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40","Type":"ContainerStarted","Data":"31d028ba7a0960fc77f304e653e5ae4dec27fc01e383ddda6f2b7b5c77b9cb6b"} Jan 28 17:43:28 crc kubenswrapper[5001]: I0128 17:43:28.144669 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40","Type":"ContainerStarted","Data":"a37e465841ea78abc331b3ce031ecd0b2b4a9e9d384c952318f97308d136e4ad"} Jan 28 17:43:28 crc kubenswrapper[5001]: I0128 17:43:28.162286 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.16226247 podStartE2EDuration="2.16226247s" podCreationTimestamp="2026-01-28 17:43:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:43:28.157373818 +0000 UTC m=+1654.325162058" watchObservedRunningTime="2026-01-28 17:43:28.16226247 +0000 UTC m=+1654.330050700" Jan 28 17:43:28 crc kubenswrapper[5001]: I0128 17:43:28.604610 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b81d82-4454-40ec-a48c-6e068ff5f850" path="/var/lib/kubelet/pods/75b81d82-4454-40ec-a48c-6e068ff5f850/volumes" Jan 28 17:43:29 crc kubenswrapper[5001]: I0128 17:43:29.167345 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40","Type":"ContainerStarted","Data":"76a13fc09d096f38f3fc0a417def22f792beda4cb97d780bdb7c1e58ba2e74a2"} Jan 28 17:43:29 crc kubenswrapper[5001]: I0128 17:43:29.190828 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.190807898 podStartE2EDuration="2.190807898s" podCreationTimestamp="2026-01-28 17:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:43:29.186654448 +0000 UTC m=+1655.354442698" watchObservedRunningTime="2026-01-28 17:43:29.190807898 +0000 UTC m=+1655.358596138" Jan 28 17:43:31 crc kubenswrapper[5001]: I0128 17:43:31.499027 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:31 crc kubenswrapper[5001]: I0128 17:43:31.499458 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:32 crc kubenswrapper[5001]: I0128 17:43:32.500225 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:32 crc kubenswrapper[5001]: I0128 17:43:32.522340 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:33 crc kubenswrapper[5001]: I0128 17:43:33.272174 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:43:34 crc kubenswrapper[5001]: I0128 17:43:34.834031 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:43:34 crc kubenswrapper[5001]: I0128 17:43:34.834132 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:43:36 crc kubenswrapper[5001]: I0128 17:43:36.500100 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:36 crc kubenswrapper[5001]: I0128 17:43:36.500912 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:37 crc kubenswrapper[5001]: I0128 17:43:37.514438 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:37 crc kubenswrapper[5001]: I0128 17:43:37.514501 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:37 crc kubenswrapper[5001]: I0128 17:43:37.582311 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4b238e8f-2244-480f-86f8-a5262f531e04" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.165:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:43:37 crc kubenswrapper[5001]: I0128 17:43:37.582437 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4b238e8f-2244-480f-86f8-a5262f531e04" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.165:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:43:38 crc kubenswrapper[5001]: I0128 17:43:38.556366 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="63b0d119-bac1-436b-8cf2-bfa3a9a4bf40" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.166:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:43:38 crc kubenswrapper[5001]: I0128 17:43:38.597245 5001 prober.go:107] "Probe failed" probeType="Startup" 
pod="nova-kuttl-default/nova-kuttl-api-0" podUID="63b0d119-bac1-436b-8cf2-bfa3a9a4bf40" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.166:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:43:46 crc kubenswrapper[5001]: I0128 17:43:46.501798 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:46 crc kubenswrapper[5001]: I0128 17:43:46.503356 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:46 crc kubenswrapper[5001]: I0128 17:43:46.504046 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:46 crc kubenswrapper[5001]: I0128 17:43:46.504120 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:43:47 crc kubenswrapper[5001]: I0128 17:43:47.519200 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:47 crc kubenswrapper[5001]: I0128 17:43:47.520352 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:47 crc kubenswrapper[5001]: I0128 17:43:47.520391 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:47 crc kubenswrapper[5001]: I0128 17:43:47.525174 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:48 crc kubenswrapper[5001]: I0128 17:43:48.324635 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:48 crc kubenswrapper[5001]: I0128 17:43:48.327561 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.068252 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.070197 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.077049 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.078470 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.093837 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.105612 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.208897 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t9v4\" (UniqueName: \"kubernetes.io/projected/ffb8aec7-32c4-45d0-9337-95843a72a04b-kube-api-access-9t9v4\") pod \"nova-kuttl-api-2\" (UID: \"ffb8aec7-32c4-45d0-9337-95843a72a04b\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.208941 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/633c525a-2821-41ad-9b18-96997a7f5a85-logs\") pod \"nova-kuttl-api-1\" (UID: \"633c525a-2821-41ad-9b18-96997a7f5a85\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.208995 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffb8aec7-32c4-45d0-9337-95843a72a04b-logs\") pod \"nova-kuttl-api-2\" (UID: \"ffb8aec7-32c4-45d0-9337-95843a72a04b\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.209042 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb8aec7-32c4-45d0-9337-95843a72a04b-config-data\") pod \"nova-kuttl-api-2\" (UID: \"ffb8aec7-32c4-45d0-9337-95843a72a04b\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.209125 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/633c525a-2821-41ad-9b18-96997a7f5a85-config-data\") pod \"nova-kuttl-api-1\" (UID: \"633c525a-2821-41ad-9b18-96997a7f5a85\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.209177 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shbjt\" (UniqueName: \"kubernetes.io/projected/633c525a-2821-41ad-9b18-96997a7f5a85-kube-api-access-shbjt\") pod \"nova-kuttl-api-1\" (UID: \"633c525a-2821-41ad-9b18-96997a7f5a85\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.311146 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/633c525a-2821-41ad-9b18-96997a7f5a85-config-data\") pod \"nova-kuttl-api-1\" (UID: \"633c525a-2821-41ad-9b18-96997a7f5a85\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.311212 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shbjt\" (UniqueName: \"kubernetes.io/projected/633c525a-2821-41ad-9b18-96997a7f5a85-kube-api-access-shbjt\") pod \"nova-kuttl-api-1\" (UID: \"633c525a-2821-41ad-9b18-96997a7f5a85\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.311245 5001 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t9v4\" (UniqueName: \"kubernetes.io/projected/ffb8aec7-32c4-45d0-9337-95843a72a04b-kube-api-access-9t9v4\") pod \"nova-kuttl-api-2\" (UID: \"ffb8aec7-32c4-45d0-9337-95843a72a04b\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.311261 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/633c525a-2821-41ad-9b18-96997a7f5a85-logs\") pod \"nova-kuttl-api-1\" (UID: \"633c525a-2821-41ad-9b18-96997a7f5a85\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.311283 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffb8aec7-32c4-45d0-9337-95843a72a04b-logs\") pod \"nova-kuttl-api-2\" (UID: \"ffb8aec7-32c4-45d0-9337-95843a72a04b\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.311316 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb8aec7-32c4-45d0-9337-95843a72a04b-config-data\") pod \"nova-kuttl-api-2\" (UID: \"ffb8aec7-32c4-45d0-9337-95843a72a04b\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.311818 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffb8aec7-32c4-45d0-9337-95843a72a04b-logs\") pod \"nova-kuttl-api-2\" (UID: \"ffb8aec7-32c4-45d0-9337-95843a72a04b\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.312162 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/633c525a-2821-41ad-9b18-96997a7f5a85-logs\") pod \"nova-kuttl-api-1\" (UID: \"633c525a-2821-41ad-9b18-96997a7f5a85\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.329108 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb8aec7-32c4-45d0-9337-95843a72a04b-config-data\") pod \"nova-kuttl-api-2\" (UID: \"ffb8aec7-32c4-45d0-9337-95843a72a04b\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.329597 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/633c525a-2821-41ad-9b18-96997a7f5a85-config-data\") pod \"nova-kuttl-api-1\" (UID: \"633c525a-2821-41ad-9b18-96997a7f5a85\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.333920 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shbjt\" (UniqueName: \"kubernetes.io/projected/633c525a-2821-41ad-9b18-96997a7f5a85-kube-api-access-shbjt\") pod \"nova-kuttl-api-1\" (UID: \"633c525a-2821-41ad-9b18-96997a7f5a85\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.340434 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t9v4\" (UniqueName: \"kubernetes.io/projected/ffb8aec7-32c4-45d0-9337-95843a72a04b-kube-api-access-9t9v4\") pod \"nova-kuttl-api-2\" (UID: \"ffb8aec7-32c4-45d0-9337-95843a72a04b\") " 
pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.399970 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.413551 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.414663 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.417810 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.423531 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.425859 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.435399 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.463252 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.519404 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98e586c1-822a-4de4-9c80-3022342e8215-config-data\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"98e586c1-822a-4de4-9c80-3022342e8215\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.522630 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dd8g\" (UniqueName: \"kubernetes.io/projected/98e586c1-822a-4de4-9c80-3022342e8215-kube-api-access-9dd8g\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"98e586c1-822a-4de4-9c80-3022342e8215\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.625521 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dd8g\" (UniqueName: \"kubernetes.io/projected/98e586c1-822a-4de4-9c80-3022342e8215-kube-api-access-9dd8g\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"98e586c1-822a-4de4-9c80-3022342e8215\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.625902 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5rrk\" (UniqueName: \"kubernetes.io/projected/2b9aba33-5169-4acf-8b87-43d5053c97bd-kube-api-access-d5rrk\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"2b9aba33-5169-4acf-8b87-43d5053c97bd\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.625922 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b9aba33-5169-4acf-8b87-43d5053c97bd-config-data\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"2b9aba33-5169-4acf-8b87-43d5053c97bd\") " 
pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.625996 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98e586c1-822a-4de4-9c80-3022342e8215-config-data\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"98e586c1-822a-4de4-9c80-3022342e8215\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.636368 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98e586c1-822a-4de4-9c80-3022342e8215-config-data\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"98e586c1-822a-4de4-9c80-3022342e8215\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.644794 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dd8g\" (UniqueName: \"kubernetes.io/projected/98e586c1-822a-4de4-9c80-3022342e8215-kube-api-access-9dd8g\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"98e586c1-822a-4de4-9c80-3022342e8215\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.727793 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5rrk\" (UniqueName: \"kubernetes.io/projected/2b9aba33-5169-4acf-8b87-43d5053c97bd-kube-api-access-d5rrk\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"2b9aba33-5169-4acf-8b87-43d5053c97bd\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.727837 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b9aba33-5169-4acf-8b87-43d5053c97bd-config-data\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"2b9aba33-5169-4acf-8b87-43d5053c97bd\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.732531 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b9aba33-5169-4acf-8b87-43d5053c97bd-config-data\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"2b9aba33-5169-4acf-8b87-43d5053c97bd\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.747631 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5rrk\" (UniqueName: \"kubernetes.io/projected/2b9aba33-5169-4acf-8b87-43d5053c97bd-kube-api-access-d5rrk\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"2b9aba33-5169-4acf-8b87-43d5053c97bd\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.826198 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.836218 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.899649 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 28 17:43:51 crc kubenswrapper[5001]: I0128 17:43:51.912141 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 28 17:43:52 crc kubenswrapper[5001]: I0128 17:43:52.357107 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"633c525a-2821-41ad-9b18-96997a7f5a85","Type":"ContainerStarted","Data":"7c870bd0c4271bb951b7fa9434d937ded00747b0b31eb280912021bc5d98f7c5"} Jan 28 17:43:52 crc kubenswrapper[5001]: I0128 17:43:52.357719 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"633c525a-2821-41ad-9b18-96997a7f5a85","Type":"ContainerStarted","Data":"b1f9fc5a745ecc39d7112be76283bc267aa6874aff8f79151adb61bd9df1f0f6"} Jan 28 17:43:52 crc kubenswrapper[5001]: I0128 17:43:52.357739 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"633c525a-2821-41ad-9b18-96997a7f5a85","Type":"ContainerStarted","Data":"680089215e078d59ea8c9ce8fe16dfe487d24d5aa8853fafc7043d01d1348a85"} Jan 28 17:43:52 crc kubenswrapper[5001]: I0128 17:43:52.357821 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 28 17:43:52 crc kubenswrapper[5001]: I0128 17:43:52.359927 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"ffb8aec7-32c4-45d0-9337-95843a72a04b","Type":"ContainerStarted","Data":"d1edc0e4f075c1be4c71f2b44b9dfcaaf8e37b507cc4178a48b42ad9707b64d4"} Jan 28 17:43:52 crc kubenswrapper[5001]: I0128 17:43:52.359998 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"ffb8aec7-32c4-45d0-9337-95843a72a04b","Type":"ContainerStarted","Data":"082ef95dd2591df502fd9dce816f9beff945f6d7a6ad8b7d2d79a7ca51aab127"} Jan 28 17:43:52 crc kubenswrapper[5001]: I0128 17:43:52.360012 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"ffb8aec7-32c4-45d0-9337-95843a72a04b","Type":"ContainerStarted","Data":"462bfc13af6482665fa99f0f743a8345c919def7158d21519918af962bf15d25"} Jan 28 17:43:52 crc kubenswrapper[5001]: W0128 17:43:52.367412 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98e586c1_822a_4de4_9c80_3022342e8215.slice/crio-461453e302377a9c8397204d7221c9b38ae1b0a112f8f4f78a571e8c4fdee29d WatchSource:0}: Error finding container 461453e302377a9c8397204d7221c9b38ae1b0a112f8f4f78a571e8c4fdee29d: Status 404 returned error can't find the container with id 461453e302377a9c8397204d7221c9b38ae1b0a112f8f4f78a571e8c4fdee29d Jan 28 17:43:52 crc kubenswrapper[5001]: I0128 17:43:52.379775 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-1" podStartSLOduration=1.379757682 podStartE2EDuration="1.379757682s" podCreationTimestamp="2026-01-28 17:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:43:52.378805364 +0000 UTC m=+1678.546593604" watchObservedRunningTime="2026-01-28 17:43:52.379757682 +0000 UTC m=+1678.547545912" 
Jan 28 17:43:52 crc kubenswrapper[5001]: I0128 17:43:52.425387 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-2" podStartSLOduration=1.425372169 podStartE2EDuration="1.425372169s" podCreationTimestamp="2026-01-28 17:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:43:52.405039512 +0000 UTC m=+1678.572827742" watchObservedRunningTime="2026-01-28 17:43:52.425372169 +0000 UTC m=+1678.593160399" Jan 28 17:43:52 crc kubenswrapper[5001]: I0128 17:43:52.429770 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 28 17:43:53 crc kubenswrapper[5001]: I0128 17:43:53.368495 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"2b9aba33-5169-4acf-8b87-43d5053c97bd","Type":"ContainerStarted","Data":"3defe13160ecf6c0cdf9d317f5e920897eee4dffdb0b0f87e05d972e8e79ea3d"} Jan 28 17:43:53 crc kubenswrapper[5001]: I0128 17:43:53.368589 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"2b9aba33-5169-4acf-8b87-43d5053c97bd","Type":"ContainerStarted","Data":"04bc472a52ca41ac14944f499830a5c5db1bcdf7989ddb3d4455034beaf4d8ce"} Jan 28 17:43:53 crc kubenswrapper[5001]: I0128 17:43:53.368614 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 17:43:53 crc kubenswrapper[5001]: I0128 17:43:53.370069 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"98e586c1-822a-4de4-9c80-3022342e8215","Type":"ContainerStarted","Data":"2971951f680808c4b22425a7b40bb5370498257ac7c3ebbe96865358735fcec3"} Jan 28 17:43:53 crc kubenswrapper[5001]: I0128 17:43:53.370112 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"98e586c1-822a-4de4-9c80-3022342e8215","Type":"ContainerStarted","Data":"461453e302377a9c8397204d7221c9b38ae1b0a112f8f4f78a571e8c4fdee29d"} Jan 28 17:43:53 crc kubenswrapper[5001]: I0128 17:43:53.370296 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 17:43:53 crc kubenswrapper[5001]: I0128 17:43:53.387038 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" podStartSLOduration=2.387022846 podStartE2EDuration="2.387022846s" podCreationTimestamp="2026-01-28 17:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:43:53.383148624 +0000 UTC m=+1679.550936864" watchObservedRunningTime="2026-01-28 17:43:53.387022846 +0000 UTC m=+1679.554811076" Jan 28 17:43:53 crc kubenswrapper[5001]: I0128 17:43:53.400202 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" podStartSLOduration=2.400186496 podStartE2EDuration="2.400186496s" podCreationTimestamp="2026-01-28 17:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:43:53.397227651 +0000 UTC m=+1679.565015901" watchObservedRunningTime="2026-01-28 17:43:53.400186496 +0000 UTC 
m=+1679.567974726" Jan 28 17:43:57 crc kubenswrapper[5001]: I0128 17:43:57.889011 5001 scope.go:117] "RemoveContainer" containerID="1b1e8b7430e61de252e231e83a5ffbd87c6171217243cc041388da0f0a883191" Jan 28 17:43:57 crc kubenswrapper[5001]: I0128 17:43:57.917141 5001 scope.go:117] "RemoveContainer" containerID="596d436cdb262f5813b078f76c001d300849eae806fff2d5b3f61ddf6f5316e8" Jan 28 17:43:57 crc kubenswrapper[5001]: I0128 17:43:57.951074 5001 scope.go:117] "RemoveContainer" containerID="20342c4048c675e00fde67c78a0369fb621ed4f3ad71cb16eaab444c29cad6df" Jan 28 17:43:57 crc kubenswrapper[5001]: I0128 17:43:57.981328 5001 scope.go:117] "RemoveContainer" containerID="22bd3cdc21bf1f59ca3c11bab6677ce09a684e199f135aa1585e052450c70b29" Jan 28 17:43:58 crc kubenswrapper[5001]: I0128 17:43:58.017820 5001 scope.go:117] "RemoveContainer" containerID="9b8ec41a75ace235e1d0438625bd9867434b2ccdc062686c9ddd22df15f89830" Jan 28 17:43:58 crc kubenswrapper[5001]: I0128 17:43:58.043089 5001 scope.go:117] "RemoveContainer" containerID="41a458233c0c1529c5bf1853bc12336b1e4c8cf26d3cb3227daf91edd4d9d097" Jan 28 17:44:01 crc kubenswrapper[5001]: I0128 17:44:01.400825 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:44:01 crc kubenswrapper[5001]: I0128 17:44:01.401289 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:44:01 crc kubenswrapper[5001]: I0128 17:44:01.418904 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:44:01 crc kubenswrapper[5001]: I0128 17:44:01.418989 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:44:01 crc kubenswrapper[5001]: I0128 17:44:01.863610 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 17:44:01 crc kubenswrapper[5001]: I0128 17:44:01.884153 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 17:44:02 crc kubenswrapper[5001]: I0128 17:44:02.566217 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="ffb8aec7-32c4-45d0-9337-95843a72a04b" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.167:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:44:02 crc kubenswrapper[5001]: I0128 17:44:02.566259 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="633c525a-2821-41ad-9b18-96997a7f5a85" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.168:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:44:02 crc kubenswrapper[5001]: I0128 17:44:02.566219 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="633c525a-2821-41ad-9b18-96997a7f5a85" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.168:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:44:02 crc kubenswrapper[5001]: I0128 17:44:02.566289 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="ffb8aec7-32c4-45d0-9337-95843a72a04b" 
containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.167:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.137733 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.139441 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.146384 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.147713 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.154852 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.165558 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.212493 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4crw\" (UniqueName: \"kubernetes.io/projected/59119035-ddbf-48f4-bb92-e08e747fbd7f-kube-api-access-c4crw\") pod \"nova-kuttl-scheduler-2\" (UID: \"59119035-ddbf-48f4-bb92-e08e747fbd7f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.212616 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59119035-ddbf-48f4-bb92-e08e747fbd7f-config-data\") pod \"nova-kuttl-scheduler-2\" (UID: \"59119035-ddbf-48f4-bb92-e08e747fbd7f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.212650 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5bwh\" (UniqueName: \"kubernetes.io/projected/84acbe82-6092-44aa-83f5-19ef333f5733-kube-api-access-g5bwh\") pod \"nova-kuttl-scheduler-1\" (UID: \"84acbe82-6092-44aa-83f5-19ef333f5733\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.212713 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84acbe82-6092-44aa-83f5-19ef333f5733-config-data\") pod \"nova-kuttl-scheduler-1\" (UID: \"84acbe82-6092-44aa-83f5-19ef333f5733\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.302689 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.305264 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.313791 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84acbe82-6092-44aa-83f5-19ef333f5733-config-data\") pod \"nova-kuttl-scheduler-1\" (UID: \"84acbe82-6092-44aa-83f5-19ef333f5733\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.313858 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4crw\" (UniqueName: \"kubernetes.io/projected/59119035-ddbf-48f4-bb92-e08e747fbd7f-kube-api-access-c4crw\") pod \"nova-kuttl-scheduler-2\" (UID: \"59119035-ddbf-48f4-bb92-e08e747fbd7f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.313919 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59119035-ddbf-48f4-bb92-e08e747fbd7f-config-data\") pod \"nova-kuttl-scheduler-2\" (UID: \"59119035-ddbf-48f4-bb92-e08e747fbd7f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.313941 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5bwh\" (UniqueName: \"kubernetes.io/projected/84acbe82-6092-44aa-83f5-19ef333f5733-kube-api-access-g5bwh\") pod \"nova-kuttl-scheduler-1\" (UID: \"84acbe82-6092-44aa-83f5-19ef333f5733\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.320073 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.322095 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.322437 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59119035-ddbf-48f4-bb92-e08e747fbd7f-config-data\") pod \"nova-kuttl-scheduler-2\" (UID: \"59119035-ddbf-48f4-bb92-e08e747fbd7f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.325455 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84acbe82-6092-44aa-83f5-19ef333f5733-config-data\") pod \"nova-kuttl-scheduler-1\" (UID: \"84acbe82-6092-44aa-83f5-19ef333f5733\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.337475 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5bwh\" (UniqueName: \"kubernetes.io/projected/84acbe82-6092-44aa-83f5-19ef333f5733-kube-api-access-g5bwh\") pod \"nova-kuttl-scheduler-1\" (UID: \"84acbe82-6092-44aa-83f5-19ef333f5733\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.346412 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4crw\" (UniqueName: \"kubernetes.io/projected/59119035-ddbf-48f4-bb92-e08e747fbd7f-kube-api-access-c4crw\") pod \"nova-kuttl-scheduler-2\" (UID: \"59119035-ddbf-48f4-bb92-e08e747fbd7f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.346548 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.356348 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.415569 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bfe2e1d-424b-43d6-a27a-f24d77d25797-config-data\") pod \"nova-kuttl-metadata-2\" (UID: \"2bfe2e1d-424b-43d6-a27a-f24d77d25797\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.415677 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkk4q\" (UniqueName: \"kubernetes.io/projected/2bfe2e1d-424b-43d6-a27a-f24d77d25797-kube-api-access-mkk4q\") pod \"nova-kuttl-metadata-2\" (UID: \"2bfe2e1d-424b-43d6-a27a-f24d77d25797\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.415725 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bfe2e1d-424b-43d6-a27a-f24d77d25797-logs\") pod \"nova-kuttl-metadata-2\" (UID: \"2bfe2e1d-424b-43d6-a27a-f24d77d25797\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.415773 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnz9b\" (UniqueName: \"kubernetes.io/projected/af18b22f-3443-4200-a5f2-84a5c3426623-kube-api-access-gnz9b\") pod \"nova-kuttl-metadata-1\" (UID: \"af18b22f-3443-4200-a5f2-84a5c3426623\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:03 
crc kubenswrapper[5001]: I0128 17:44:03.415903 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af18b22f-3443-4200-a5f2-84a5c3426623-config-data\") pod \"nova-kuttl-metadata-1\" (UID: \"af18b22f-3443-4200-a5f2-84a5c3426623\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.415952 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af18b22f-3443-4200-a5f2-84a5c3426623-logs\") pod \"nova-kuttl-metadata-1\" (UID: \"af18b22f-3443-4200-a5f2-84a5c3426623\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.463809 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.492283 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.517969 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bfe2e1d-424b-43d6-a27a-f24d77d25797-logs\") pod \"nova-kuttl-metadata-2\" (UID: \"2bfe2e1d-424b-43d6-a27a-f24d77d25797\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.518054 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnz9b\" (UniqueName: \"kubernetes.io/projected/af18b22f-3443-4200-a5f2-84a5c3426623-kube-api-access-gnz9b\") pod \"nova-kuttl-metadata-1\" (UID: \"af18b22f-3443-4200-a5f2-84a5c3426623\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.518121 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af18b22f-3443-4200-a5f2-84a5c3426623-config-data\") pod \"nova-kuttl-metadata-1\" (UID: \"af18b22f-3443-4200-a5f2-84a5c3426623\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.518154 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af18b22f-3443-4200-a5f2-84a5c3426623-logs\") pod \"nova-kuttl-metadata-1\" (UID: \"af18b22f-3443-4200-a5f2-84a5c3426623\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.518230 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bfe2e1d-424b-43d6-a27a-f24d77d25797-config-data\") pod \"nova-kuttl-metadata-2\" (UID: \"2bfe2e1d-424b-43d6-a27a-f24d77d25797\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.518270 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkk4q\" (UniqueName: \"kubernetes.io/projected/2bfe2e1d-424b-43d6-a27a-f24d77d25797-kube-api-access-mkk4q\") pod \"nova-kuttl-metadata-2\" (UID: \"2bfe2e1d-424b-43d6-a27a-f24d77d25797\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.518447 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/2bfe2e1d-424b-43d6-a27a-f24d77d25797-logs\") pod \"nova-kuttl-metadata-2\" (UID: \"2bfe2e1d-424b-43d6-a27a-f24d77d25797\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.519050 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af18b22f-3443-4200-a5f2-84a5c3426623-logs\") pod \"nova-kuttl-metadata-1\" (UID: \"af18b22f-3443-4200-a5f2-84a5c3426623\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.523703 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af18b22f-3443-4200-a5f2-84a5c3426623-config-data\") pod \"nova-kuttl-metadata-1\" (UID: \"af18b22f-3443-4200-a5f2-84a5c3426623\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.523891 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bfe2e1d-424b-43d6-a27a-f24d77d25797-config-data\") pod \"nova-kuttl-metadata-2\" (UID: \"2bfe2e1d-424b-43d6-a27a-f24d77d25797\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.538531 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnz9b\" (UniqueName: \"kubernetes.io/projected/af18b22f-3443-4200-a5f2-84a5c3426623-kube-api-access-gnz9b\") pod \"nova-kuttl-metadata-1\" (UID: \"af18b22f-3443-4200-a5f2-84a5c3426623\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.542802 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkk4q\" (UniqueName: \"kubernetes.io/projected/2bfe2e1d-424b-43d6-a27a-f24d77d25797-kube-api-access-mkk4q\") pod \"nova-kuttl-metadata-2\" (UID: \"2bfe2e1d-424b-43d6-a27a-f24d77d25797\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.647711 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:03 crc kubenswrapper[5001]: I0128 17:44:03.710912 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.012714 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.173010 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.279390 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 28 17:44:04 crc kubenswrapper[5001]: W0128 17:44:04.279761 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf18b22f_3443_4200_a5f2_84a5c3426623.slice/crio-384c209da58038d9253a5aea8efac667f29d23df10d4b2ab7718481308c48682 WatchSource:0}: Error finding container 384c209da58038d9253a5aea8efac667f29d23df10d4b2ab7718481308c48682: Status 404 returned error can't find the container with id 384c209da58038d9253a5aea8efac667f29d23df10d4b2ab7718481308c48682 Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.336225 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 28 17:44:04 crc kubenswrapper[5001]: W0128 17:44:04.344077 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bfe2e1d_424b_43d6_a27a_f24d77d25797.slice/crio-af5d52b2462773043ae789f7adc3a3cc03124c1411a423e7ac65631a88ffb64c WatchSource:0}: Error finding container af5d52b2462773043ae789f7adc3a3cc03124c1411a423e7ac65631a88ffb64c: Status 404 returned error can't find the container with id af5d52b2462773043ae789f7adc3a3cc03124c1411a423e7ac65631a88ffb64c Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.398354 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.399735 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.421619 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.422745 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.458026 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.466253 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"2bfe2e1d-424b-43d6-a27a-f24d77d25797","Type":"ContainerStarted","Data":"af5d52b2462773043ae789f7adc3a3cc03124c1411a423e7ac65631a88ffb64c"} Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.467603 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"af18b22f-3443-4200-a5f2-84a5c3426623","Type":"ContainerStarted","Data":"384c209da58038d9253a5aea8efac667f29d23df10d4b2ab7718481308c48682"} Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.468162 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.472845 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"59119035-ddbf-48f4-bb92-e08e747fbd7f","Type":"ContainerStarted","Data":"debef5cebc0f676becc64bc87db0b1c4137866ff958254db8e7840cdf6ecd534"} Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.472889 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"59119035-ddbf-48f4-bb92-e08e747fbd7f","Type":"ContainerStarted","Data":"2f657b584d02c6c0425f92ed9ca0b679516763d7a21184dc8c4c9f27fb496cfe"} Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.488905 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"84acbe82-6092-44aa-83f5-19ef333f5733","Type":"ContainerStarted","Data":"c2486a7ea97c3e05c47bc5feb01436bfa707a7d90ac7a9a09fb07bee2b8ef897"} Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.488941 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"84acbe82-6092-44aa-83f5-19ef333f5733","Type":"ContainerStarted","Data":"fce94cc090174624f16f62fbd324c2beb31676bcc564551fcee1ad63dbf2b325"} Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.507253 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-2" podStartSLOduration=1.507223893 podStartE2EDuration="1.507223893s" podCreationTimestamp="2026-01-28 17:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:44:04.486746832 +0000 UTC m=+1690.654535072" watchObservedRunningTime="2026-01-28 17:44:04.507223893 +0000 UTC m=+1690.675012123" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.526223 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-1" podStartSLOduration=1.5262021909999999 podStartE2EDuration="1.526202191s" podCreationTimestamp="2026-01-28 17:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:44:04.507373508 +0000 UTC m=+1690.675161738" watchObservedRunningTime="2026-01-28 17:44:04.526202191 +0000 UTC m=+1690.693990421" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.538159 5001 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m22v\" (UniqueName: \"kubernetes.io/projected/af2dc302-b448-4a32-84de-319d261be0ee-kube-api-access-9m22v\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"af2dc302-b448-4a32-84de-319d261be0ee\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.538256 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9d4g\" (UniqueName: \"kubernetes.io/projected/32687b75-0c4b-45d8-b1ef-5927de16f581-kube-api-access-f9d4g\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"32687b75-0c4b-45d8-b1ef-5927de16f581\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.538295 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af2dc302-b448-4a32-84de-319d261be0ee-config-data\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"af2dc302-b448-4a32-84de-319d261be0ee\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.538356 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32687b75-0c4b-45d8-b1ef-5927de16f581-config-data\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"32687b75-0c4b-45d8-b1ef-5927de16f581\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.640131 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m22v\" (UniqueName: \"kubernetes.io/projected/af2dc302-b448-4a32-84de-319d261be0ee-kube-api-access-9m22v\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"af2dc302-b448-4a32-84de-319d261be0ee\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.640242 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9d4g\" (UniqueName: \"kubernetes.io/projected/32687b75-0c4b-45d8-b1ef-5927de16f581-kube-api-access-f9d4g\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"32687b75-0c4b-45d8-b1ef-5927de16f581\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.640273 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af2dc302-b448-4a32-84de-319d261be0ee-config-data\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"af2dc302-b448-4a32-84de-319d261be0ee\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.640361 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32687b75-0c4b-45d8-b1ef-5927de16f581-config-data\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"32687b75-0c4b-45d8-b1ef-5927de16f581\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.644888 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af2dc302-b448-4a32-84de-319d261be0ee-config-data\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"af2dc302-b448-4a32-84de-319d261be0ee\") " 
pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.644900 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32687b75-0c4b-45d8-b1ef-5927de16f581-config-data\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"32687b75-0c4b-45d8-b1ef-5927de16f581\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.659120 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9d4g\" (UniqueName: \"kubernetes.io/projected/32687b75-0c4b-45d8-b1ef-5927de16f581-kube-api-access-f9d4g\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"32687b75-0c4b-45d8-b1ef-5927de16f581\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.668280 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m22v\" (UniqueName: \"kubernetes.io/projected/af2dc302-b448-4a32-84de-319d261be0ee-kube-api-access-9m22v\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"af2dc302-b448-4a32-84de-319d261be0ee\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.796165 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.813809 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.835914 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.835988 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.836033 5001 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.837014 5001 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8"} pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:44:04 crc kubenswrapper[5001]: I0128 17:44:04.837073 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" containerID="cri-o://69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" gracePeriod=600 Jan 28 17:44:05 crc kubenswrapper[5001]: E0128 17:44:05.009590 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:44:05 crc kubenswrapper[5001]: I0128 17:44:05.313079 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 28 17:44:05 crc kubenswrapper[5001]: I0128 17:44:05.437213 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 28 17:44:05 crc kubenswrapper[5001]: I0128 17:44:05.500882 5001 generic.go:334] "Generic (PLEG): container finished" podID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" exitCode=0 Jan 28 17:44:05 crc kubenswrapper[5001]: I0128 17:44:05.500961 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerDied","Data":"69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8"} Jan 28 17:44:05 crc kubenswrapper[5001]: I0128 17:44:05.501012 5001 scope.go:117] "RemoveContainer" containerID="9c2ea3f31e70bc76378f33610638ba8a4614d235f2874eb9a110ed1d5f56e411" Jan 28 17:44:05 crc kubenswrapper[5001]: I0128 17:44:05.501756 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:44:05 crc kubenswrapper[5001]: E0128 17:44:05.502080 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:44:05 crc kubenswrapper[5001]: I0128 17:44:05.515479 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"af2dc302-b448-4a32-84de-319d261be0ee","Type":"ContainerStarted","Data":"28c12958caefb2745327a265307b711c7f61d8b10207815b9efcb3842c63ea2e"} Jan 28 17:44:05 crc kubenswrapper[5001]: I0128 17:44:05.526885 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"32687b75-0c4b-45d8-b1ef-5927de16f581","Type":"ContainerStarted","Data":"6ff0430123279743fe4d58c62226edfab78245334b83378fdafc5458bf177691"} Jan 28 17:44:05 crc kubenswrapper[5001]: I0128 17:44:05.532236 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"2bfe2e1d-424b-43d6-a27a-f24d77d25797","Type":"ContainerStarted","Data":"f91e37cd5fe6bd0751701c05b1c25cc6b252d8c5dbe6661abc6fc7b677b9898f"} Jan 28 17:44:05 crc kubenswrapper[5001]: I0128 17:44:05.532275 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"2bfe2e1d-424b-43d6-a27a-f24d77d25797","Type":"ContainerStarted","Data":"f0c06e3acbe88dd49c0e45aceb5af60edf98e4077840dae5f7d99b8ff8c40cf0"} Jan 28 17:44:05 crc kubenswrapper[5001]: I0128 17:44:05.548554 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"af18b22f-3443-4200-a5f2-84a5c3426623","Type":"ContainerStarted","Data":"f863a73b7264145e4923b274e58376a9c8820ec7a10f27e5fda25f3966dd735b"} Jan 28 17:44:05 crc kubenswrapper[5001]: I0128 17:44:05.548595 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"af18b22f-3443-4200-a5f2-84a5c3426623","Type":"ContainerStarted","Data":"c7aa164b7fa393403089a2d4eac9d81f9d6fac73876e3663ed0221170b7ff487"} Jan 28 17:44:05 crc kubenswrapper[5001]: I0128 17:44:05.562239 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-2" podStartSLOduration=2.562216997 podStartE2EDuration="2.562216997s" podCreationTimestamp="2026-01-28 17:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:44:05.549389266 +0000 UTC m=+1691.717177496" watchObservedRunningTime="2026-01-28 17:44:05.562216997 +0000 UTC m=+1691.730005217" Jan 28 17:44:05 crc kubenswrapper[5001]: I0128 17:44:05.601206 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-1" podStartSLOduration=2.6011924520000003 podStartE2EDuration="2.601192452s" podCreationTimestamp="2026-01-28 17:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:44:05.598771832 +0000 UTC m=+1691.766560062" watchObservedRunningTime="2026-01-28 17:44:05.601192452 +0000 UTC m=+1691.768980682" Jan 28 17:44:06 crc kubenswrapper[5001]: I0128 17:44:06.557059 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"32687b75-0c4b-45d8-b1ef-5927de16f581","Type":"ContainerStarted","Data":"e2255e29062c8290e306ed1c8db94c9a0e6c057e0969903bed27f7677359a6bf"} Jan 28 17:44:06 crc kubenswrapper[5001]: I0128 17:44:06.558462 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 28 17:44:06 crc kubenswrapper[5001]: I0128 17:44:06.563217 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"af2dc302-b448-4a32-84de-319d261be0ee","Type":"ContainerStarted","Data":"6f1c38389b3245b8f3a97f8d49cba5e6a053746d09a9ca5929080c72d2e243c5"} Jan 28 17:44:06 crc kubenswrapper[5001]: I0128 17:44:06.563907 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 17:44:06 crc kubenswrapper[5001]: I0128 17:44:06.580573 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" podStartSLOduration=2.580552451 podStartE2EDuration="2.580552451s" podCreationTimestamp="2026-01-28 17:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:44:06.571416308 +0000 UTC m=+1692.739204538" watchObservedRunningTime="2026-01-28 17:44:06.580552451 +0000 UTC m=+1692.748340681" Jan 28 17:44:06 crc kubenswrapper[5001]: I0128 17:44:06.597027 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" podStartSLOduration=2.597010257 podStartE2EDuration="2.597010257s" 
podCreationTimestamp="2026-01-28 17:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:44:06.594148964 +0000 UTC m=+1692.761937214" watchObservedRunningTime="2026-01-28 17:44:06.597010257 +0000 UTC m=+1692.764798487" Jan 28 17:44:08 crc kubenswrapper[5001]: I0128 17:44:08.464546 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 17:44:08 crc kubenswrapper[5001]: I0128 17:44:08.492988 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 17:44:08 crc kubenswrapper[5001]: I0128 17:44:08.648568 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:08 crc kubenswrapper[5001]: I0128 17:44:08.648938 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:08 crc kubenswrapper[5001]: I0128 17:44:08.711771 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:08 crc kubenswrapper[5001]: I0128 17:44:08.711834 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:11 crc kubenswrapper[5001]: I0128 17:44:11.405202 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:44:11 crc kubenswrapper[5001]: I0128 17:44:11.406364 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:44:11 crc kubenswrapper[5001]: I0128 17:44:11.406437 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:44:11 crc kubenswrapper[5001]: I0128 17:44:11.408783 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:44:11 crc kubenswrapper[5001]: I0128 17:44:11.423578 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:44:11 crc kubenswrapper[5001]: I0128 17:44:11.423925 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:44:11 crc kubenswrapper[5001]: I0128 17:44:11.426109 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:44:11 crc kubenswrapper[5001]: I0128 17:44:11.430347 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:44:11 crc kubenswrapper[5001]: I0128 17:44:11.604650 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:44:11 crc kubenswrapper[5001]: I0128 17:44:11.604693 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:44:11 crc kubenswrapper[5001]: I0128 17:44:11.607949 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:44:11 crc kubenswrapper[5001]: I0128 17:44:11.609244 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:44:13 crc 
kubenswrapper[5001]: I0128 17:44:13.464954 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 17:44:13 crc kubenswrapper[5001]: I0128 17:44:13.493447 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 17:44:13 crc kubenswrapper[5001]: I0128 17:44:13.496443 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 17:44:13 crc kubenswrapper[5001]: I0128 17:44:13.525778 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 17:44:13 crc kubenswrapper[5001]: I0128 17:44:13.644785 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 17:44:13 crc kubenswrapper[5001]: I0128 17:44:13.648018 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:13 crc kubenswrapper[5001]: I0128 17:44:13.648064 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:13 crc kubenswrapper[5001]: I0128 17:44:13.654637 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 17:44:13 crc kubenswrapper[5001]: I0128 17:44:13.711868 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:13 crc kubenswrapper[5001]: I0128 17:44:13.712244 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:14 crc kubenswrapper[5001]: I0128 17:44:14.730200 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="af18b22f-3443-4200-a5f2-84a5c3426623" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.173:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:44:14 crc kubenswrapper[5001]: I0128 17:44:14.730234 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="af18b22f-3443-4200-a5f2-84a5c3426623" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.173:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:44:14 crc kubenswrapper[5001]: I0128 17:44:14.813141 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="2bfe2e1d-424b-43d6-a27a-f24d77d25797" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.174:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:44:14 crc kubenswrapper[5001]: I0128 17:44:14.814173 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="2bfe2e1d-424b-43d6-a27a-f24d77d25797" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.174:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:44:14 crc kubenswrapper[5001]: I0128 17:44:14.827669 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 28 17:44:14 crc kubenswrapper[5001]: I0128 17:44:14.846034 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 17:44:16 crc kubenswrapper[5001]: I0128 17:44:16.598091 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:44:16 crc kubenswrapper[5001]: E0128 17:44:16.598435 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:44:23 crc kubenswrapper[5001]: I0128 17:44:23.650344 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:23 crc kubenswrapper[5001]: I0128 17:44:23.653072 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:23 crc kubenswrapper[5001]: I0128 17:44:23.654205 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:23 crc kubenswrapper[5001]: I0128 17:44:23.702035 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:23 crc kubenswrapper[5001]: I0128 17:44:23.714957 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:23 crc kubenswrapper[5001]: I0128 17:44:23.715073 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:23 crc kubenswrapper[5001]: I0128 17:44:23.716687 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:23 crc kubenswrapper[5001]: I0128 17:44:23.717849 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:24 crc kubenswrapper[5001]: I0128 17:44:24.937006 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 28 17:44:24 crc kubenswrapper[5001]: I0128 17:44:24.937551 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="ffb8aec7-32c4-45d0-9337-95843a72a04b" containerName="nova-kuttl-api-log" containerID="cri-o://082ef95dd2591df502fd9dce816f9beff945f6d7a6ad8b7d2d79a7ca51aab127" gracePeriod=30 Jan 28 17:44:24 crc kubenswrapper[5001]: I0128 17:44:24.938082 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="ffb8aec7-32c4-45d0-9337-95843a72a04b" containerName="nova-kuttl-api-api" containerID="cri-o://d1edc0e4f075c1be4c71f2b44b9dfcaaf8e37b507cc4178a48b42ad9707b64d4" gracePeriod=30 Jan 28 17:44:24 crc kubenswrapper[5001]: I0128 17:44:24.946518 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 28 17:44:24 crc kubenswrapper[5001]: I0128 17:44:24.946747 5001 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="633c525a-2821-41ad-9b18-96997a7f5a85" containerName="nova-kuttl-api-log" containerID="cri-o://b1f9fc5a745ecc39d7112be76283bc267aa6874aff8f79151adb61bd9df1f0f6" gracePeriod=30 Jan 28 17:44:24 crc kubenswrapper[5001]: I0128 17:44:24.946843 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="633c525a-2821-41ad-9b18-96997a7f5a85" containerName="nova-kuttl-api-api" containerID="cri-o://7c870bd0c4271bb951b7fa9434d937ded00747b0b31eb280912021bc5d98f7c5" gracePeriod=30 Jan 28 17:44:25 crc kubenswrapper[5001]: I0128 17:44:25.405695 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 28 17:44:25 crc kubenswrapper[5001]: I0128 17:44:25.405894 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" podUID="98e586c1-822a-4de4-9c80-3022342e8215" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://2971951f680808c4b22425a7b40bb5370498257ac7c3ebbe96865358735fcec3" gracePeriod=30 Jan 28 17:44:25 crc kubenswrapper[5001]: I0128 17:44:25.421017 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 28 17:44:25 crc kubenswrapper[5001]: I0128 17:44:25.421402 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" podUID="2b9aba33-5169-4acf-8b87-43d5053c97bd" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://3defe13160ecf6c0cdf9d317f5e920897eee4dffdb0b0f87e05d972e8e79ea3d" gracePeriod=30 Jan 28 17:44:25 crc kubenswrapper[5001]: I0128 17:44:25.515002 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 28 17:44:25 crc kubenswrapper[5001]: I0128 17:44:25.515276 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" podUID="32687b75-0c4b-45d8-b1ef-5927de16f581" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://e2255e29062c8290e306ed1c8db94c9a0e6c057e0969903bed27f7677359a6bf" gracePeriod=30 Jan 28 17:44:25 crc kubenswrapper[5001]: I0128 17:44:25.522406 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 28 17:44:25 crc kubenswrapper[5001]: I0128 17:44:25.522711 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" podUID="af2dc302-b448-4a32-84de-319d261be0ee" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://6f1c38389b3245b8f3a97f8d49cba5e6a053746d09a9ca5929080c72d2e243c5" gracePeriod=30 Jan 28 17:44:25 crc kubenswrapper[5001]: I0128 17:44:25.714576 5001 generic.go:334] "Generic (PLEG): container finished" podID="633c525a-2821-41ad-9b18-96997a7f5a85" containerID="b1f9fc5a745ecc39d7112be76283bc267aa6874aff8f79151adb61bd9df1f0f6" exitCode=143 Jan 28 17:44:25 crc kubenswrapper[5001]: I0128 17:44:25.714656 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"633c525a-2821-41ad-9b18-96997a7f5a85","Type":"ContainerDied","Data":"b1f9fc5a745ecc39d7112be76283bc267aa6874aff8f79151adb61bd9df1f0f6"} Jan 28 17:44:25 crc kubenswrapper[5001]: I0128 17:44:25.716489 5001 generic.go:334] "Generic (PLEG): container 
finished" podID="ffb8aec7-32c4-45d0-9337-95843a72a04b" containerID="082ef95dd2591df502fd9dce816f9beff945f6d7a6ad8b7d2d79a7ca51aab127" exitCode=143 Jan 28 17:44:25 crc kubenswrapper[5001]: I0128 17:44:25.717224 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"ffb8aec7-32c4-45d0-9337-95843a72a04b","Type":"ContainerDied","Data":"082ef95dd2591df502fd9dce816f9beff945f6d7a6ad8b7d2d79a7ca51aab127"} Jan 28 17:44:26 crc kubenswrapper[5001]: E0128 17:44:26.828611 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2971951f680808c4b22425a7b40bb5370498257ac7c3ebbe96865358735fcec3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 17:44:26 crc kubenswrapper[5001]: E0128 17:44:26.831111 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2971951f680808c4b22425a7b40bb5370498257ac7c3ebbe96865358735fcec3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 17:44:26 crc kubenswrapper[5001]: E0128 17:44:26.832476 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2971951f680808c4b22425a7b40bb5370498257ac7c3ebbe96865358735fcec3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 17:44:26 crc kubenswrapper[5001]: E0128 17:44:26.832537 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" podUID="98e586c1-822a-4de4-9c80-3022342e8215" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:44:26 crc kubenswrapper[5001]: E0128 17:44:26.838865 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3defe13160ecf6c0cdf9d317f5e920897eee4dffdb0b0f87e05d972e8e79ea3d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 17:44:26 crc kubenswrapper[5001]: E0128 17:44:26.840216 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3defe13160ecf6c0cdf9d317f5e920897eee4dffdb0b0f87e05d972e8e79ea3d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 17:44:26 crc kubenswrapper[5001]: E0128 17:44:26.841209 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3defe13160ecf6c0cdf9d317f5e920897eee4dffdb0b0f87e05d972e8e79ea3d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 17:44:26 crc kubenswrapper[5001]: E0128 17:44:26.841324 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" 
podUID="2b9aba33-5169-4acf-8b87-43d5053c97bd" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:44:27 crc kubenswrapper[5001]: I0128 17:44:27.732052 5001 generic.go:334] "Generic (PLEG): container finished" podID="af2dc302-b448-4a32-84de-319d261be0ee" containerID="6f1c38389b3245b8f3a97f8d49cba5e6a053746d09a9ca5929080c72d2e243c5" exitCode=0 Jan 28 17:44:27 crc kubenswrapper[5001]: I0128 17:44:27.732136 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"af2dc302-b448-4a32-84de-319d261be0ee","Type":"ContainerDied","Data":"6f1c38389b3245b8f3a97f8d49cba5e6a053746d09a9ca5929080c72d2e243c5"} Jan 28 17:44:27 crc kubenswrapper[5001]: I0128 17:44:27.733898 5001 generic.go:334] "Generic (PLEG): container finished" podID="32687b75-0c4b-45d8-b1ef-5927de16f581" containerID="e2255e29062c8290e306ed1c8db94c9a0e6c057e0969903bed27f7677359a6bf" exitCode=0 Jan 28 17:44:27 crc kubenswrapper[5001]: I0128 17:44:27.733924 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"32687b75-0c4b-45d8-b1ef-5927de16f581","Type":"ContainerDied","Data":"e2255e29062c8290e306ed1c8db94c9a0e6c057e0969903bed27f7677359a6bf"} Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.097087 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.101955 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.280510 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af2dc302-b448-4a32-84de-319d261be0ee-config-data\") pod \"af2dc302-b448-4a32-84de-319d261be0ee\" (UID: \"af2dc302-b448-4a32-84de-319d261be0ee\") " Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.280939 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9d4g\" (UniqueName: \"kubernetes.io/projected/32687b75-0c4b-45d8-b1ef-5927de16f581-kube-api-access-f9d4g\") pod \"32687b75-0c4b-45d8-b1ef-5927de16f581\" (UID: \"32687b75-0c4b-45d8-b1ef-5927de16f581\") " Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.281221 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m22v\" (UniqueName: \"kubernetes.io/projected/af2dc302-b448-4a32-84de-319d261be0ee-kube-api-access-9m22v\") pod \"af2dc302-b448-4a32-84de-319d261be0ee\" (UID: \"af2dc302-b448-4a32-84de-319d261be0ee\") " Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.281272 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32687b75-0c4b-45d8-b1ef-5927de16f581-config-data\") pod \"32687b75-0c4b-45d8-b1ef-5927de16f581\" (UID: \"32687b75-0c4b-45d8-b1ef-5927de16f581\") " Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.286770 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32687b75-0c4b-45d8-b1ef-5927de16f581-kube-api-access-f9d4g" (OuterVolumeSpecName: "kube-api-access-f9d4g") pod "32687b75-0c4b-45d8-b1ef-5927de16f581" (UID: "32687b75-0c4b-45d8-b1ef-5927de16f581"). InnerVolumeSpecName "kube-api-access-f9d4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.291691 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af2dc302-b448-4a32-84de-319d261be0ee-kube-api-access-9m22v" (OuterVolumeSpecName: "kube-api-access-9m22v") pod "af2dc302-b448-4a32-84de-319d261be0ee" (UID: "af2dc302-b448-4a32-84de-319d261be0ee"). InnerVolumeSpecName "kube-api-access-9m22v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.322647 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af2dc302-b448-4a32-84de-319d261be0ee-config-data" (OuterVolumeSpecName: "config-data") pod "af2dc302-b448-4a32-84de-319d261be0ee" (UID: "af2dc302-b448-4a32-84de-319d261be0ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.324620 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32687b75-0c4b-45d8-b1ef-5927de16f581-config-data" (OuterVolumeSpecName: "config-data") pod "32687b75-0c4b-45d8-b1ef-5927de16f581" (UID: "32687b75-0c4b-45d8-b1ef-5927de16f581"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.386317 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9d4g\" (UniqueName: \"kubernetes.io/projected/32687b75-0c4b-45d8-b1ef-5927de16f581-kube-api-access-f9d4g\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.386368 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m22v\" (UniqueName: \"kubernetes.io/projected/af2dc302-b448-4a32-84de-319d261be0ee-kube-api-access-9m22v\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.386385 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32687b75-0c4b-45d8-b1ef-5927de16f581-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.386406 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af2dc302-b448-4a32-84de-319d261be0ee-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.497275 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.558610 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.694088 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shbjt\" (UniqueName: \"kubernetes.io/projected/633c525a-2821-41ad-9b18-96997a7f5a85-kube-api-access-shbjt\") pod \"633c525a-2821-41ad-9b18-96997a7f5a85\" (UID: \"633c525a-2821-41ad-9b18-96997a7f5a85\") " Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.694256 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb8aec7-32c4-45d0-9337-95843a72a04b-config-data\") pod \"ffb8aec7-32c4-45d0-9337-95843a72a04b\" (UID: \"ffb8aec7-32c4-45d0-9337-95843a72a04b\") " Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.694596 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/633c525a-2821-41ad-9b18-96997a7f5a85-config-data\") pod \"633c525a-2821-41ad-9b18-96997a7f5a85\" (UID: \"633c525a-2821-41ad-9b18-96997a7f5a85\") " Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.694693 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/633c525a-2821-41ad-9b18-96997a7f5a85-logs\") pod \"633c525a-2821-41ad-9b18-96997a7f5a85\" (UID: \"633c525a-2821-41ad-9b18-96997a7f5a85\") " Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.694752 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t9v4\" (UniqueName: \"kubernetes.io/projected/ffb8aec7-32c4-45d0-9337-95843a72a04b-kube-api-access-9t9v4\") pod \"ffb8aec7-32c4-45d0-9337-95843a72a04b\" (UID: \"ffb8aec7-32c4-45d0-9337-95843a72a04b\") " Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.694808 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffb8aec7-32c4-45d0-9337-95843a72a04b-logs\") pod \"ffb8aec7-32c4-45d0-9337-95843a72a04b\" (UID: \"ffb8aec7-32c4-45d0-9337-95843a72a04b\") " Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.695336 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/633c525a-2821-41ad-9b18-96997a7f5a85-logs" (OuterVolumeSpecName: "logs") pod "633c525a-2821-41ad-9b18-96997a7f5a85" (UID: "633c525a-2821-41ad-9b18-96997a7f5a85"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.695381 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb8aec7-32c4-45d0-9337-95843a72a04b-logs" (OuterVolumeSpecName: "logs") pod "ffb8aec7-32c4-45d0-9337-95843a72a04b" (UID: "ffb8aec7-32c4-45d0-9337-95843a72a04b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.697501 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/633c525a-2821-41ad-9b18-96997a7f5a85-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.697522 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffb8aec7-32c4-45d0-9337-95843a72a04b-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.698400 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/633c525a-2821-41ad-9b18-96997a7f5a85-kube-api-access-shbjt" (OuterVolumeSpecName: "kube-api-access-shbjt") pod "633c525a-2821-41ad-9b18-96997a7f5a85" (UID: "633c525a-2821-41ad-9b18-96997a7f5a85"). InnerVolumeSpecName "kube-api-access-shbjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.699384 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb8aec7-32c4-45d0-9337-95843a72a04b-kube-api-access-9t9v4" (OuterVolumeSpecName: "kube-api-access-9t9v4") pod "ffb8aec7-32c4-45d0-9337-95843a72a04b" (UID: "ffb8aec7-32c4-45d0-9337-95843a72a04b"). InnerVolumeSpecName "kube-api-access-9t9v4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.717212 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb8aec7-32c4-45d0-9337-95843a72a04b-config-data" (OuterVolumeSpecName: "config-data") pod "ffb8aec7-32c4-45d0-9337-95843a72a04b" (UID: "ffb8aec7-32c4-45d0-9337-95843a72a04b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.721624 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/633c525a-2821-41ad-9b18-96997a7f5a85-config-data" (OuterVolumeSpecName: "config-data") pod "633c525a-2821-41ad-9b18-96997a7f5a85" (UID: "633c525a-2821-41ad-9b18-96997a7f5a85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.744927 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"32687b75-0c4b-45d8-b1ef-5927de16f581","Type":"ContainerDied","Data":"6ff0430123279743fe4d58c62226edfab78245334b83378fdafc5458bf177691"} Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.744999 5001 scope.go:117] "RemoveContainer" containerID="e2255e29062c8290e306ed1c8db94c9a0e6c057e0969903bed27f7677359a6bf" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.745018 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.748841 5001 generic.go:334] "Generic (PLEG): container finished" podID="ffb8aec7-32c4-45d0-9337-95843a72a04b" containerID="d1edc0e4f075c1be4c71f2b44b9dfcaaf8e37b507cc4178a48b42ad9707b64d4" exitCode=0 Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.748927 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.750857 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"ffb8aec7-32c4-45d0-9337-95843a72a04b","Type":"ContainerDied","Data":"d1edc0e4f075c1be4c71f2b44b9dfcaaf8e37b507cc4178a48b42ad9707b64d4"} Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.750913 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"ffb8aec7-32c4-45d0-9337-95843a72a04b","Type":"ContainerDied","Data":"462bfc13af6482665fa99f0f743a8345c919def7158d21519918af962bf15d25"} Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.756096 5001 generic.go:334] "Generic (PLEG): container finished" podID="633c525a-2821-41ad-9b18-96997a7f5a85" containerID="7c870bd0c4271bb951b7fa9434d937ded00747b0b31eb280912021bc5d98f7c5" exitCode=0 Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.756122 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"633c525a-2821-41ad-9b18-96997a7f5a85","Type":"ContainerDied","Data":"7c870bd0c4271bb951b7fa9434d937ded00747b0b31eb280912021bc5d98f7c5"} Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.756150 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"633c525a-2821-41ad-9b18-96997a7f5a85","Type":"ContainerDied","Data":"680089215e078d59ea8c9ce8fe16dfe487d24d5aa8853fafc7043d01d1348a85"} Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.756169 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.759810 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"af2dc302-b448-4a32-84de-319d261be0ee","Type":"ContainerDied","Data":"28c12958caefb2745327a265307b711c7f61d8b10207815b9efcb3842c63ea2e"} Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.759857 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.791453 5001 scope.go:117] "RemoveContainer" containerID="d1edc0e4f075c1be4c71f2b44b9dfcaaf8e37b507cc4178a48b42ad9707b64d4" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.799288 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.823624 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.834178 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/633c525a-2821-41ad-9b18-96997a7f5a85-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.834227 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t9v4\" (UniqueName: \"kubernetes.io/projected/ffb8aec7-32c4-45d0-9337-95843a72a04b-kube-api-access-9t9v4\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.834243 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shbjt\" (UniqueName: \"kubernetes.io/projected/633c525a-2821-41ad-9b18-96997a7f5a85-kube-api-access-shbjt\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.834257 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb8aec7-32c4-45d0-9337-95843a72a04b-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.846658 5001 scope.go:117] "RemoveContainer" containerID="082ef95dd2591df502fd9dce816f9beff945f6d7a6ad8b7d2d79a7ca51aab127" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.852067 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.863582 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.871850 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.871966 5001 scope.go:117] "RemoveContainer" containerID="d1edc0e4f075c1be4c71f2b44b9dfcaaf8e37b507cc4178a48b42ad9707b64d4" Jan 28 17:44:28 crc kubenswrapper[5001]: E0128 17:44:28.872725 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1edc0e4f075c1be4c71f2b44b9dfcaaf8e37b507cc4178a48b42ad9707b64d4\": container with ID starting with d1edc0e4f075c1be4c71f2b44b9dfcaaf8e37b507cc4178a48b42ad9707b64d4 not found: ID does not exist" containerID="d1edc0e4f075c1be4c71f2b44b9dfcaaf8e37b507cc4178a48b42ad9707b64d4" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.872777 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1edc0e4f075c1be4c71f2b44b9dfcaaf8e37b507cc4178a48b42ad9707b64d4"} err="failed to get container status \"d1edc0e4f075c1be4c71f2b44b9dfcaaf8e37b507cc4178a48b42ad9707b64d4\": rpc error: code = NotFound desc = could not find container \"d1edc0e4f075c1be4c71f2b44b9dfcaaf8e37b507cc4178a48b42ad9707b64d4\": container with ID starting with 
d1edc0e4f075c1be4c71f2b44b9dfcaaf8e37b507cc4178a48b42ad9707b64d4 not found: ID does not exist" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.872810 5001 scope.go:117] "RemoveContainer" containerID="082ef95dd2591df502fd9dce816f9beff945f6d7a6ad8b7d2d79a7ca51aab127" Jan 28 17:44:28 crc kubenswrapper[5001]: E0128 17:44:28.873251 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"082ef95dd2591df502fd9dce816f9beff945f6d7a6ad8b7d2d79a7ca51aab127\": container with ID starting with 082ef95dd2591df502fd9dce816f9beff945f6d7a6ad8b7d2d79a7ca51aab127 not found: ID does not exist" containerID="082ef95dd2591df502fd9dce816f9beff945f6d7a6ad8b7d2d79a7ca51aab127" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.873282 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"082ef95dd2591df502fd9dce816f9beff945f6d7a6ad8b7d2d79a7ca51aab127"} err="failed to get container status \"082ef95dd2591df502fd9dce816f9beff945f6d7a6ad8b7d2d79a7ca51aab127\": rpc error: code = NotFound desc = could not find container \"082ef95dd2591df502fd9dce816f9beff945f6d7a6ad8b7d2d79a7ca51aab127\": container with ID starting with 082ef95dd2591df502fd9dce816f9beff945f6d7a6ad8b7d2d79a7ca51aab127 not found: ID does not exist" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.873301 5001 scope.go:117] "RemoveContainer" containerID="7c870bd0c4271bb951b7fa9434d937ded00747b0b31eb280912021bc5d98f7c5" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.879024 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.885757 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.892728 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.895589 5001 scope.go:117] "RemoveContainer" containerID="b1f9fc5a745ecc39d7112be76283bc267aa6874aff8f79151adb61bd9df1f0f6" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.914510 5001 scope.go:117] "RemoveContainer" containerID="7c870bd0c4271bb951b7fa9434d937ded00747b0b31eb280912021bc5d98f7c5" Jan 28 17:44:28 crc kubenswrapper[5001]: E0128 17:44:28.916350 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c870bd0c4271bb951b7fa9434d937ded00747b0b31eb280912021bc5d98f7c5\": container with ID starting with 7c870bd0c4271bb951b7fa9434d937ded00747b0b31eb280912021bc5d98f7c5 not found: ID does not exist" containerID="7c870bd0c4271bb951b7fa9434d937ded00747b0b31eb280912021bc5d98f7c5" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.916419 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c870bd0c4271bb951b7fa9434d937ded00747b0b31eb280912021bc5d98f7c5"} err="failed to get container status \"7c870bd0c4271bb951b7fa9434d937ded00747b0b31eb280912021bc5d98f7c5\": rpc error: code = NotFound desc = could not find container \"7c870bd0c4271bb951b7fa9434d937ded00747b0b31eb280912021bc5d98f7c5\": container with ID starting with 7c870bd0c4271bb951b7fa9434d937ded00747b0b31eb280912021bc5d98f7c5 not found: ID does not exist" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.916456 5001 scope.go:117] "RemoveContainer" 
containerID="b1f9fc5a745ecc39d7112be76283bc267aa6874aff8f79151adb61bd9df1f0f6" Jan 28 17:44:28 crc kubenswrapper[5001]: E0128 17:44:28.919443 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1f9fc5a745ecc39d7112be76283bc267aa6874aff8f79151adb61bd9df1f0f6\": container with ID starting with b1f9fc5a745ecc39d7112be76283bc267aa6874aff8f79151adb61bd9df1f0f6 not found: ID does not exist" containerID="b1f9fc5a745ecc39d7112be76283bc267aa6874aff8f79151adb61bd9df1f0f6" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.919519 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1f9fc5a745ecc39d7112be76283bc267aa6874aff8f79151adb61bd9df1f0f6"} err="failed to get container status \"b1f9fc5a745ecc39d7112be76283bc267aa6874aff8f79151adb61bd9df1f0f6\": rpc error: code = NotFound desc = could not find container \"b1f9fc5a745ecc39d7112be76283bc267aa6874aff8f79151adb61bd9df1f0f6\": container with ID starting with b1f9fc5a745ecc39d7112be76283bc267aa6874aff8f79151adb61bd9df1f0f6 not found: ID does not exist" Jan 28 17:44:28 crc kubenswrapper[5001]: I0128 17:44:28.919542 5001 scope.go:117] "RemoveContainer" containerID="6f1c38389b3245b8f3a97f8d49cba5e6a053746d09a9ca5929080c72d2e243c5" Jan 28 17:44:30 crc kubenswrapper[5001]: I0128 17:44:30.594238 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:44:30 crc kubenswrapper[5001]: E0128 17:44:30.595298 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:44:30 crc kubenswrapper[5001]: I0128 17:44:30.604903 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32687b75-0c4b-45d8-b1ef-5927de16f581" path="/var/lib/kubelet/pods/32687b75-0c4b-45d8-b1ef-5927de16f581/volumes" Jan 28 17:44:30 crc kubenswrapper[5001]: I0128 17:44:30.605757 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="633c525a-2821-41ad-9b18-96997a7f5a85" path="/var/lib/kubelet/pods/633c525a-2821-41ad-9b18-96997a7f5a85/volumes" Jan 28 17:44:30 crc kubenswrapper[5001]: I0128 17:44:30.606351 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af2dc302-b448-4a32-84de-319d261be0ee" path="/var/lib/kubelet/pods/af2dc302-b448-4a32-84de-319d261be0ee/volumes" Jan 28 17:44:30 crc kubenswrapper[5001]: I0128 17:44:30.606902 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb8aec7-32c4-45d0-9337-95843a72a04b" path="/var/lib/kubelet/pods/ffb8aec7-32c4-45d0-9337-95843a72a04b/volumes" Jan 28 17:44:30 crc kubenswrapper[5001]: I0128 17:44:30.787443 5001 generic.go:334] "Generic (PLEG): container finished" podID="2b9aba33-5169-4acf-8b87-43d5053c97bd" containerID="3defe13160ecf6c0cdf9d317f5e920897eee4dffdb0b0f87e05d972e8e79ea3d" exitCode=0 Jan 28 17:44:30 crc kubenswrapper[5001]: I0128 17:44:30.787487 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" 
event={"ID":"2b9aba33-5169-4acf-8b87-43d5053c97bd","Type":"ContainerDied","Data":"3defe13160ecf6c0cdf9d317f5e920897eee4dffdb0b0f87e05d972e8e79ea3d"} Jan 28 17:44:30 crc kubenswrapper[5001]: I0128 17:44:30.979900 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.061881 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-db-create-jvk5f"] Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.072793 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b9aba33-5169-4acf-8b87-43d5053c97bd-config-data\") pod \"2b9aba33-5169-4acf-8b87-43d5053c97bd\" (UID: \"2b9aba33-5169-4acf-8b87-43d5053c97bd\") " Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.072834 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-db-create-jvk5f"] Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.072997 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5rrk\" (UniqueName: \"kubernetes.io/projected/2b9aba33-5169-4acf-8b87-43d5053c97bd-kube-api-access-d5rrk\") pod \"2b9aba33-5169-4acf-8b87-43d5053c97bd\" (UID: \"2b9aba33-5169-4acf-8b87-43d5053c97bd\") " Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.081453 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b9aba33-5169-4acf-8b87-43d5053c97bd-kube-api-access-d5rrk" (OuterVolumeSpecName: "kube-api-access-d5rrk") pod "2b9aba33-5169-4acf-8b87-43d5053c97bd" (UID: "2b9aba33-5169-4acf-8b87-43d5053c97bd"). InnerVolumeSpecName "kube-api-access-d5rrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.111824 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b9aba33-5169-4acf-8b87-43d5053c97bd-config-data" (OuterVolumeSpecName: "config-data") pod "2b9aba33-5169-4acf-8b87-43d5053c97bd" (UID: "2b9aba33-5169-4acf-8b87-43d5053c97bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.140206 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.175075 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5rrk\" (UniqueName: \"kubernetes.io/projected/2b9aba33-5169-4acf-8b87-43d5053c97bd-kube-api-access-d5rrk\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.176372 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b9aba33-5169-4acf-8b87-43d5053c97bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.277253 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98e586c1-822a-4de4-9c80-3022342e8215-config-data\") pod \"98e586c1-822a-4de4-9c80-3022342e8215\" (UID: \"98e586c1-822a-4de4-9c80-3022342e8215\") " Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.277988 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dd8g\" (UniqueName: \"kubernetes.io/projected/98e586c1-822a-4de4-9c80-3022342e8215-kube-api-access-9dd8g\") pod \"98e586c1-822a-4de4-9c80-3022342e8215\" (UID: \"98e586c1-822a-4de4-9c80-3022342e8215\") " Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.284101 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98e586c1-822a-4de4-9c80-3022342e8215-kube-api-access-9dd8g" (OuterVolumeSpecName: "kube-api-access-9dd8g") pod "98e586c1-822a-4de4-9c80-3022342e8215" (UID: "98e586c1-822a-4de4-9c80-3022342e8215"). InnerVolumeSpecName "kube-api-access-9dd8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.302259 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98e586c1-822a-4de4-9c80-3022342e8215-config-data" (OuterVolumeSpecName: "config-data") pod "98e586c1-822a-4de4-9c80-3022342e8215" (UID: "98e586c1-822a-4de4-9c80-3022342e8215"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.379894 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98e586c1-822a-4de4-9c80-3022342e8215-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.379928 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dd8g\" (UniqueName: \"kubernetes.io/projected/98e586c1-822a-4de4-9c80-3022342e8215-kube-api-access-9dd8g\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.645807 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.646086 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-2" podUID="59119035-ddbf-48f4-bb92-e08e747fbd7f" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://debef5cebc0f676becc64bc87db0b1c4137866ff958254db8e7840cdf6ecd534" gracePeriod=30 Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.660688 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.660955 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-1" podUID="84acbe82-6092-44aa-83f5-19ef333f5733" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://c2486a7ea97c3e05c47bc5feb01436bfa707a7d90ac7a9a09fb07bee2b8ef897" gracePeriod=30 Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.784073 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.784442 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="2bfe2e1d-424b-43d6-a27a-f24d77d25797" containerName="nova-kuttl-metadata-log" containerID="cri-o://f0c06e3acbe88dd49c0e45aceb5af60edf98e4077840dae5f7d99b8ff8c40cf0" gracePeriod=30 Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.785083 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="2bfe2e1d-424b-43d6-a27a-f24d77d25797" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://f91e37cd5fe6bd0751701c05b1c25cc6b252d8c5dbe6661abc6fc7b677b9898f" gracePeriod=30 Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.806819 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.814341 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="af18b22f-3443-4200-a5f2-84a5c3426623" containerName="nova-kuttl-metadata-log" containerID="cri-o://c7aa164b7fa393403089a2d4eac9d81f9d6fac73876e3663ed0221170b7ff487" gracePeriod=30 Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.814872 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="af18b22f-3443-4200-a5f2-84a5c3426623" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://f863a73b7264145e4923b274e58376a9c8820ec7a10f27e5fda25f3966dd735b" gracePeriod=30 Jan 28 17:44:31 crc 
kubenswrapper[5001]: I0128 17:44:31.828273 5001 generic.go:334] "Generic (PLEG): container finished" podID="98e586c1-822a-4de4-9c80-3022342e8215" containerID="2971951f680808c4b22425a7b40bb5370498257ac7c3ebbe96865358735fcec3" exitCode=0 Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.828399 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"98e586c1-822a-4de4-9c80-3022342e8215","Type":"ContainerDied","Data":"2971951f680808c4b22425a7b40bb5370498257ac7c3ebbe96865358735fcec3"} Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.828431 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"98e586c1-822a-4de4-9c80-3022342e8215","Type":"ContainerDied","Data":"461453e302377a9c8397204d7221c9b38ae1b0a112f8f4f78a571e8c4fdee29d"} Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.828430 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.828452 5001 scope.go:117] "RemoveContainer" containerID="2971951f680808c4b22425a7b40bb5370498257ac7c3ebbe96865358735fcec3" Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.836517 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"2b9aba33-5169-4acf-8b87-43d5053c97bd","Type":"ContainerDied","Data":"04bc472a52ca41ac14944f499830a5c5db1bcdf7989ddb3d4455034beaf4d8ce"} Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.836623 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.870275 5001 scope.go:117] "RemoveContainer" containerID="2971951f680808c4b22425a7b40bb5370498257ac7c3ebbe96865358735fcec3" Jan 28 17:44:31 crc kubenswrapper[5001]: E0128 17:44:31.872485 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2971951f680808c4b22425a7b40bb5370498257ac7c3ebbe96865358735fcec3\": container with ID starting with 2971951f680808c4b22425a7b40bb5370498257ac7c3ebbe96865358735fcec3 not found: ID does not exist" containerID="2971951f680808c4b22425a7b40bb5370498257ac7c3ebbe96865358735fcec3" Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.872543 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2971951f680808c4b22425a7b40bb5370498257ac7c3ebbe96865358735fcec3"} err="failed to get container status \"2971951f680808c4b22425a7b40bb5370498257ac7c3ebbe96865358735fcec3\": rpc error: code = NotFound desc = could not find container \"2971951f680808c4b22425a7b40bb5370498257ac7c3ebbe96865358735fcec3\": container with ID starting with 2971951f680808c4b22425a7b40bb5370498257ac7c3ebbe96865358735fcec3 not found: ID does not exist" Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.872576 5001 scope.go:117] "RemoveContainer" containerID="3defe13160ecf6c0cdf9d317f5e920897eee4dffdb0b0f87e05d972e8e79ea3d" Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.882314 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.899958 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 28 17:44:31 
crc kubenswrapper[5001]: I0128 17:44:31.911046 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 28 17:44:31 crc kubenswrapper[5001]: I0128 17:44:31.921221 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 28 17:44:32 crc kubenswrapper[5001]: I0128 17:44:32.607206 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b9aba33-5169-4acf-8b87-43d5053c97bd" path="/var/lib/kubelet/pods/2b9aba33-5169-4acf-8b87-43d5053c97bd/volumes" Jan 28 17:44:32 crc kubenswrapper[5001]: I0128 17:44:32.608417 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5be66815-6cf5-429b-8bd0-95eb7e898655" path="/var/lib/kubelet/pods/5be66815-6cf5-429b-8bd0-95eb7e898655/volumes" Jan 28 17:44:32 crc kubenswrapper[5001]: I0128 17:44:32.609124 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98e586c1-822a-4de4-9c80-3022342e8215" path="/var/lib/kubelet/pods/98e586c1-822a-4de4-9c80-3022342e8215/volumes" Jan 28 17:44:32 crc kubenswrapper[5001]: I0128 17:44:32.849821 5001 generic.go:334] "Generic (PLEG): container finished" podID="2bfe2e1d-424b-43d6-a27a-f24d77d25797" containerID="f0c06e3acbe88dd49c0e45aceb5af60edf98e4077840dae5f7d99b8ff8c40cf0" exitCode=143 Jan 28 17:44:32 crc kubenswrapper[5001]: I0128 17:44:32.849868 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"2bfe2e1d-424b-43d6-a27a-f24d77d25797","Type":"ContainerDied","Data":"f0c06e3acbe88dd49c0e45aceb5af60edf98e4077840dae5f7d99b8ff8c40cf0"} Jan 28 17:44:32 crc kubenswrapper[5001]: I0128 17:44:32.855253 5001 generic.go:334] "Generic (PLEG): container finished" podID="af18b22f-3443-4200-a5f2-84a5c3426623" containerID="c7aa164b7fa393403089a2d4eac9d81f9d6fac73876e3663ed0221170b7ff487" exitCode=143 Jan 28 17:44:32 crc kubenswrapper[5001]: I0128 17:44:32.855307 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"af18b22f-3443-4200-a5f2-84a5c3426623","Type":"ContainerDied","Data":"c7aa164b7fa393403089a2d4eac9d81f9d6fac73876e3663ed0221170b7ff487"} Jan 28 17:44:33 crc kubenswrapper[5001]: E0128 17:44:33.466562 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2486a7ea97c3e05c47bc5feb01436bfa707a7d90ac7a9a09fb07bee2b8ef897" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:44:33 crc kubenswrapper[5001]: E0128 17:44:33.467820 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2486a7ea97c3e05c47bc5feb01436bfa707a7d90ac7a9a09fb07bee2b8ef897" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:44:33 crc kubenswrapper[5001]: E0128 17:44:33.469739 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2486a7ea97c3e05c47bc5feb01436bfa707a7d90ac7a9a09fb07bee2b8ef897" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:44:33 crc kubenswrapper[5001]: E0128 17:44:33.469773 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: 
cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-1" podUID="84acbe82-6092-44aa-83f5-19ef333f5733" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:44:33 crc kubenswrapper[5001]: E0128 17:44:33.494060 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="debef5cebc0f676becc64bc87db0b1c4137866ff958254db8e7840cdf6ecd534" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:44:33 crc kubenswrapper[5001]: E0128 17:44:33.495443 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="debef5cebc0f676becc64bc87db0b1c4137866ff958254db8e7840cdf6ecd534" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:44:33 crc kubenswrapper[5001]: E0128 17:44:33.496554 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="debef5cebc0f676becc64bc87db0b1c4137866ff958254db8e7840cdf6ecd534" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:44:33 crc kubenswrapper[5001]: E0128 17:44:33.496589 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-2" podUID="59119035-ddbf-48f4-bb92-e08e747fbd7f" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:44:34 crc kubenswrapper[5001]: I0128 17:44:34.031646 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-db-create-6z2n5"] Jan 28 17:44:34 crc kubenswrapper[5001]: I0128 17:44:34.040171 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-db-create-6z2n5"] Jan 28 17:44:34 crc kubenswrapper[5001]: I0128 17:44:34.615782 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca854e6e-32b7-42d4-86b0-148253804265" path="/var/lib/kubelet/pods/ca854e6e-32b7-42d4-86b0-148253804265/volumes" Jan 28 17:44:34 crc kubenswrapper[5001]: I0128 17:44:34.916223 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="2bfe2e1d-424b-43d6-a27a-f24d77d25797" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.174:8775/\": read tcp 10.217.0.2:55102->10.217.0.174:8775: read: connection reset by peer" Jan 28 17:44:34 crc kubenswrapper[5001]: I0128 17:44:34.916604 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="2bfe2e1d-424b-43d6-a27a-f24d77d25797" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.174:8775/\": read tcp 10.217.0.2:55094->10.217.0.174:8775: read: connection reset by peer" Jan 28 17:44:34 crc kubenswrapper[5001]: I0128 17:44:34.941132 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="af18b22f-3443-4200-a5f2-84a5c3426623" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.173:8775/\": read tcp 
10.217.0.2:51612->10.217.0.173:8775: read: connection reset by peer" Jan 28 17:44:34 crc kubenswrapper[5001]: I0128 17:44:34.941216 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="af18b22f-3443-4200-a5f2-84a5c3426623" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.173:8775/\": read tcp 10.217.0.2:51616->10.217.0.173:8775: read: connection reset by peer" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.031912 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-6341-account-create-update-zhnmv"] Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.038707 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-d461-account-create-update-zsz4n"] Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.049230 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-6341-account-create-update-zhnmv"] Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.060009 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-d461-account-create-update-zsz4n"] Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.453626 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.457833 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.558757 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnz9b\" (UniqueName: \"kubernetes.io/projected/af18b22f-3443-4200-a5f2-84a5c3426623-kube-api-access-gnz9b\") pod \"af18b22f-3443-4200-a5f2-84a5c3426623\" (UID: \"af18b22f-3443-4200-a5f2-84a5c3426623\") " Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.559166 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkk4q\" (UniqueName: \"kubernetes.io/projected/2bfe2e1d-424b-43d6-a27a-f24d77d25797-kube-api-access-mkk4q\") pod \"2bfe2e1d-424b-43d6-a27a-f24d77d25797\" (UID: \"2bfe2e1d-424b-43d6-a27a-f24d77d25797\") " Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.559203 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bfe2e1d-424b-43d6-a27a-f24d77d25797-config-data\") pod \"2bfe2e1d-424b-43d6-a27a-f24d77d25797\" (UID: \"2bfe2e1d-424b-43d6-a27a-f24d77d25797\") " Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.559303 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bfe2e1d-424b-43d6-a27a-f24d77d25797-logs\") pod \"2bfe2e1d-424b-43d6-a27a-f24d77d25797\" (UID: \"2bfe2e1d-424b-43d6-a27a-f24d77d25797\") " Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.559374 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af18b22f-3443-4200-a5f2-84a5c3426623-logs\") pod \"af18b22f-3443-4200-a5f2-84a5c3426623\" (UID: \"af18b22f-3443-4200-a5f2-84a5c3426623\") " Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.559398 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/af18b22f-3443-4200-a5f2-84a5c3426623-config-data\") pod \"af18b22f-3443-4200-a5f2-84a5c3426623\" (UID: \"af18b22f-3443-4200-a5f2-84a5c3426623\") " Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.560509 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bfe2e1d-424b-43d6-a27a-f24d77d25797-logs" (OuterVolumeSpecName: "logs") pod "2bfe2e1d-424b-43d6-a27a-f24d77d25797" (UID: "2bfe2e1d-424b-43d6-a27a-f24d77d25797"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.560517 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af18b22f-3443-4200-a5f2-84a5c3426623-logs" (OuterVolumeSpecName: "logs") pod "af18b22f-3443-4200-a5f2-84a5c3426623" (UID: "af18b22f-3443-4200-a5f2-84a5c3426623"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.564644 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af18b22f-3443-4200-a5f2-84a5c3426623-kube-api-access-gnz9b" (OuterVolumeSpecName: "kube-api-access-gnz9b") pod "af18b22f-3443-4200-a5f2-84a5c3426623" (UID: "af18b22f-3443-4200-a5f2-84a5c3426623"). InnerVolumeSpecName "kube-api-access-gnz9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.573245 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bfe2e1d-424b-43d6-a27a-f24d77d25797-kube-api-access-mkk4q" (OuterVolumeSpecName: "kube-api-access-mkk4q") pod "2bfe2e1d-424b-43d6-a27a-f24d77d25797" (UID: "2bfe2e1d-424b-43d6-a27a-f24d77d25797"). InnerVolumeSpecName "kube-api-access-mkk4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.589190 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af18b22f-3443-4200-a5f2-84a5c3426623-config-data" (OuterVolumeSpecName: "config-data") pod "af18b22f-3443-4200-a5f2-84a5c3426623" (UID: "af18b22f-3443-4200-a5f2-84a5c3426623"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.589391 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bfe2e1d-424b-43d6-a27a-f24d77d25797-config-data" (OuterVolumeSpecName: "config-data") pod "2bfe2e1d-424b-43d6-a27a-f24d77d25797" (UID: "2bfe2e1d-424b-43d6-a27a-f24d77d25797"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.661870 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bfe2e1d-424b-43d6-a27a-f24d77d25797-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.662778 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bfe2e1d-424b-43d6-a27a-f24d77d25797-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.662803 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af18b22f-3443-4200-a5f2-84a5c3426623-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.662814 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af18b22f-3443-4200-a5f2-84a5c3426623-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.662825 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnz9b\" (UniqueName: \"kubernetes.io/projected/af18b22f-3443-4200-a5f2-84a5c3426623-kube-api-access-gnz9b\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.662839 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkk4q\" (UniqueName: \"kubernetes.io/projected/2bfe2e1d-424b-43d6-a27a-f24d77d25797-kube-api-access-mkk4q\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.880775 5001 generic.go:334] "Generic (PLEG): container finished" podID="2bfe2e1d-424b-43d6-a27a-f24d77d25797" containerID="f91e37cd5fe6bd0751701c05b1c25cc6b252d8c5dbe6661abc6fc7b677b9898f" exitCode=0 Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.880854 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"2bfe2e1d-424b-43d6-a27a-f24d77d25797","Type":"ContainerDied","Data":"f91e37cd5fe6bd0751701c05b1c25cc6b252d8c5dbe6661abc6fc7b677b9898f"} Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.880885 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"2bfe2e1d-424b-43d6-a27a-f24d77d25797","Type":"ContainerDied","Data":"af5d52b2462773043ae789f7adc3a3cc03124c1411a423e7ac65631a88ffb64c"} Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.880875 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.880907 5001 scope.go:117] "RemoveContainer" containerID="f91e37cd5fe6bd0751701c05b1c25cc6b252d8c5dbe6661abc6fc7b677b9898f" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.883465 5001 generic.go:334] "Generic (PLEG): container finished" podID="af18b22f-3443-4200-a5f2-84a5c3426623" containerID="f863a73b7264145e4923b274e58376a9c8820ec7a10f27e5fda25f3966dd735b" exitCode=0 Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.883513 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"af18b22f-3443-4200-a5f2-84a5c3426623","Type":"ContainerDied","Data":"f863a73b7264145e4923b274e58376a9c8820ec7a10f27e5fda25f3966dd735b"} Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.883535 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.883539 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"af18b22f-3443-4200-a5f2-84a5c3426623","Type":"ContainerDied","Data":"384c209da58038d9253a5aea8efac667f29d23df10d4b2ab7718481308c48682"} Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.904092 5001 scope.go:117] "RemoveContainer" containerID="f0c06e3acbe88dd49c0e45aceb5af60edf98e4077840dae5f7d99b8ff8c40cf0" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.918082 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.928505 5001 scope.go:117] "RemoveContainer" containerID="f91e37cd5fe6bd0751701c05b1c25cc6b252d8c5dbe6661abc6fc7b677b9898f" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.928747 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 28 17:44:35 crc kubenswrapper[5001]: E0128 17:44:35.932097 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f91e37cd5fe6bd0751701c05b1c25cc6b252d8c5dbe6661abc6fc7b677b9898f\": container with ID starting with f91e37cd5fe6bd0751701c05b1c25cc6b252d8c5dbe6661abc6fc7b677b9898f not found: ID does not exist" containerID="f91e37cd5fe6bd0751701c05b1c25cc6b252d8c5dbe6661abc6fc7b677b9898f" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.932140 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f91e37cd5fe6bd0751701c05b1c25cc6b252d8c5dbe6661abc6fc7b677b9898f"} err="failed to get container status \"f91e37cd5fe6bd0751701c05b1c25cc6b252d8c5dbe6661abc6fc7b677b9898f\": rpc error: code = NotFound desc = could not find container \"f91e37cd5fe6bd0751701c05b1c25cc6b252d8c5dbe6661abc6fc7b677b9898f\": container with ID starting with f91e37cd5fe6bd0751701c05b1c25cc6b252d8c5dbe6661abc6fc7b677b9898f not found: ID does not exist" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.932171 5001 scope.go:117] "RemoveContainer" containerID="f0c06e3acbe88dd49c0e45aceb5af60edf98e4077840dae5f7d99b8ff8c40cf0" Jan 28 17:44:35 crc kubenswrapper[5001]: E0128 17:44:35.940034 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0c06e3acbe88dd49c0e45aceb5af60edf98e4077840dae5f7d99b8ff8c40cf0\": container with ID starting with 
f0c06e3acbe88dd49c0e45aceb5af60edf98e4077840dae5f7d99b8ff8c40cf0 not found: ID does not exist" containerID="f0c06e3acbe88dd49c0e45aceb5af60edf98e4077840dae5f7d99b8ff8c40cf0" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.940068 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0c06e3acbe88dd49c0e45aceb5af60edf98e4077840dae5f7d99b8ff8c40cf0"} err="failed to get container status \"f0c06e3acbe88dd49c0e45aceb5af60edf98e4077840dae5f7d99b8ff8c40cf0\": rpc error: code = NotFound desc = could not find container \"f0c06e3acbe88dd49c0e45aceb5af60edf98e4077840dae5f7d99b8ff8c40cf0\": container with ID starting with f0c06e3acbe88dd49c0e45aceb5af60edf98e4077840dae5f7d99b8ff8c40cf0 not found: ID does not exist" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.940091 5001 scope.go:117] "RemoveContainer" containerID="f863a73b7264145e4923b274e58376a9c8820ec7a10f27e5fda25f3966dd735b" Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.969046 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.974082 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 28 17:44:35 crc kubenswrapper[5001]: I0128 17:44:35.974141 5001 scope.go:117] "RemoveContainer" containerID="c7aa164b7fa393403089a2d4eac9d81f9d6fac73876e3663ed0221170b7ff487" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.050158 5001 scope.go:117] "RemoveContainer" containerID="f863a73b7264145e4923b274e58376a9c8820ec7a10f27e5fda25f3966dd735b" Jan 28 17:44:36 crc kubenswrapper[5001]: E0128 17:44:36.064471 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f863a73b7264145e4923b274e58376a9c8820ec7a10f27e5fda25f3966dd735b\": container with ID starting with f863a73b7264145e4923b274e58376a9c8820ec7a10f27e5fda25f3966dd735b not found: ID does not exist" containerID="f863a73b7264145e4923b274e58376a9c8820ec7a10f27e5fda25f3966dd735b" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.064516 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f863a73b7264145e4923b274e58376a9c8820ec7a10f27e5fda25f3966dd735b"} err="failed to get container status \"f863a73b7264145e4923b274e58376a9c8820ec7a10f27e5fda25f3966dd735b\": rpc error: code = NotFound desc = could not find container \"f863a73b7264145e4923b274e58376a9c8820ec7a10f27e5fda25f3966dd735b\": container with ID starting with f863a73b7264145e4923b274e58376a9c8820ec7a10f27e5fda25f3966dd735b not found: ID does not exist" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.064548 5001 scope.go:117] "RemoveContainer" containerID="c7aa164b7fa393403089a2d4eac9d81f9d6fac73876e3663ed0221170b7ff487" Jan 28 17:44:36 crc kubenswrapper[5001]: E0128 17:44:36.068444 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7aa164b7fa393403089a2d4eac9d81f9d6fac73876e3663ed0221170b7ff487\": container with ID starting with c7aa164b7fa393403089a2d4eac9d81f9d6fac73876e3663ed0221170b7ff487 not found: ID does not exist" containerID="c7aa164b7fa393403089a2d4eac9d81f9d6fac73876e3663ed0221170b7ff487" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.068479 5001 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c7aa164b7fa393403089a2d4eac9d81f9d6fac73876e3663ed0221170b7ff487"} err="failed to get container status \"c7aa164b7fa393403089a2d4eac9d81f9d6fac73876e3663ed0221170b7ff487\": rpc error: code = NotFound desc = could not find container \"c7aa164b7fa393403089a2d4eac9d81f9d6fac73876e3663ed0221170b7ff487\": container with ID starting with c7aa164b7fa393403089a2d4eac9d81f9d6fac73876e3663ed0221170b7ff487 not found: ID does not exist" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.606262 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bfe2e1d-424b-43d6-a27a-f24d77d25797" path="/var/lib/kubelet/pods/2bfe2e1d-424b-43d6-a27a-f24d77d25797/volumes" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.607461 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ac3c059-fbe0-479b-8abc-cb0018604e0f" path="/var/lib/kubelet/pods/5ac3c059-fbe0-479b-8abc-cb0018604e0f/volumes" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.608570 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af18b22f-3443-4200-a5f2-84a5c3426623" path="/var/lib/kubelet/pods/af18b22f-3443-4200-a5f2-84a5c3426623/volumes" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.609596 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb719788-2aef-4b17-9706-0bd463e0ebe7" path="/var/lib/kubelet/pods/fb719788-2aef-4b17-9706-0bd463e0ebe7/volumes" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.679349 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.786636 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84acbe82-6092-44aa-83f5-19ef333f5733-config-data\") pod \"84acbe82-6092-44aa-83f5-19ef333f5733\" (UID: \"84acbe82-6092-44aa-83f5-19ef333f5733\") " Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.786840 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5bwh\" (UniqueName: \"kubernetes.io/projected/84acbe82-6092-44aa-83f5-19ef333f5733-kube-api-access-g5bwh\") pod \"84acbe82-6092-44aa-83f5-19ef333f5733\" (UID: \"84acbe82-6092-44aa-83f5-19ef333f5733\") " Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.794233 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84acbe82-6092-44aa-83f5-19ef333f5733-kube-api-access-g5bwh" (OuterVolumeSpecName: "kube-api-access-g5bwh") pod "84acbe82-6092-44aa-83f5-19ef333f5733" (UID: "84acbe82-6092-44aa-83f5-19ef333f5733"). InnerVolumeSpecName "kube-api-access-g5bwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.811657 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84acbe82-6092-44aa-83f5-19ef333f5733-config-data" (OuterVolumeSpecName: "config-data") pod "84acbe82-6092-44aa-83f5-19ef333f5733" (UID: "84acbe82-6092-44aa-83f5-19ef333f5733"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.877087 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.888915 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5bwh\" (UniqueName: \"kubernetes.io/projected/84acbe82-6092-44aa-83f5-19ef333f5733-kube-api-access-g5bwh\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.888952 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84acbe82-6092-44aa-83f5-19ef333f5733-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.894453 5001 generic.go:334] "Generic (PLEG): container finished" podID="59119035-ddbf-48f4-bb92-e08e747fbd7f" containerID="debef5cebc0f676becc64bc87db0b1c4137866ff958254db8e7840cdf6ecd534" exitCode=0 Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.894500 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"59119035-ddbf-48f4-bb92-e08e747fbd7f","Type":"ContainerDied","Data":"debef5cebc0f676becc64bc87db0b1c4137866ff958254db8e7840cdf6ecd534"} Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.894523 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"59119035-ddbf-48f4-bb92-e08e747fbd7f","Type":"ContainerDied","Data":"2f657b584d02c6c0425f92ed9ca0b679516763d7a21184dc8c4c9f27fb496cfe"} Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.894539 5001 scope.go:117] "RemoveContainer" containerID="debef5cebc0f676becc64bc87db0b1c4137866ff958254db8e7840cdf6ecd534" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.894609 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.899788 5001 generic.go:334] "Generic (PLEG): container finished" podID="84acbe82-6092-44aa-83f5-19ef333f5733" containerID="c2486a7ea97c3e05c47bc5feb01436bfa707a7d90ac7a9a09fb07bee2b8ef897" exitCode=0 Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.899861 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"84acbe82-6092-44aa-83f5-19ef333f5733","Type":"ContainerDied","Data":"c2486a7ea97c3e05c47bc5feb01436bfa707a7d90ac7a9a09fb07bee2b8ef897"} Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.899892 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"84acbe82-6092-44aa-83f5-19ef333f5733","Type":"ContainerDied","Data":"fce94cc090174624f16f62fbd324c2beb31676bcc564551fcee1ad63dbf2b325"} Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.899946 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.917851 5001 scope.go:117] "RemoveContainer" containerID="debef5cebc0f676becc64bc87db0b1c4137866ff958254db8e7840cdf6ecd534" Jan 28 17:44:36 crc kubenswrapper[5001]: E0128 17:44:36.918352 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"debef5cebc0f676becc64bc87db0b1c4137866ff958254db8e7840cdf6ecd534\": container with ID starting with debef5cebc0f676becc64bc87db0b1c4137866ff958254db8e7840cdf6ecd534 not found: ID does not exist" containerID="debef5cebc0f676becc64bc87db0b1c4137866ff958254db8e7840cdf6ecd534" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.918401 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"debef5cebc0f676becc64bc87db0b1c4137866ff958254db8e7840cdf6ecd534"} err="failed to get container status \"debef5cebc0f676becc64bc87db0b1c4137866ff958254db8e7840cdf6ecd534\": rpc error: code = NotFound desc = could not find container \"debef5cebc0f676becc64bc87db0b1c4137866ff958254db8e7840cdf6ecd534\": container with ID starting with debef5cebc0f676becc64bc87db0b1c4137866ff958254db8e7840cdf6ecd534 not found: ID does not exist" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.918433 5001 scope.go:117] "RemoveContainer" containerID="c2486a7ea97c3e05c47bc5feb01436bfa707a7d90ac7a9a09fb07bee2b8ef897" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.941702 5001 scope.go:117] "RemoveContainer" containerID="c2486a7ea97c3e05c47bc5feb01436bfa707a7d90ac7a9a09fb07bee2b8ef897" Jan 28 17:44:36 crc kubenswrapper[5001]: E0128 17:44:36.942250 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2486a7ea97c3e05c47bc5feb01436bfa707a7d90ac7a9a09fb07bee2b8ef897\": container with ID starting with c2486a7ea97c3e05c47bc5feb01436bfa707a7d90ac7a9a09fb07bee2b8ef897 not found: ID does not exist" containerID="c2486a7ea97c3e05c47bc5feb01436bfa707a7d90ac7a9a09fb07bee2b8ef897" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.942288 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2486a7ea97c3e05c47bc5feb01436bfa707a7d90ac7a9a09fb07bee2b8ef897"} err="failed to get container status \"c2486a7ea97c3e05c47bc5feb01436bfa707a7d90ac7a9a09fb07bee2b8ef897\": rpc error: code = NotFound desc = could not find container \"c2486a7ea97c3e05c47bc5feb01436bfa707a7d90ac7a9a09fb07bee2b8ef897\": container with ID starting with c2486a7ea97c3e05c47bc5feb01436bfa707a7d90ac7a9a09fb07bee2b8ef897 not found: ID does not exist" Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.944436 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.953048 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.991830 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59119035-ddbf-48f4-bb92-e08e747fbd7f-config-data\") pod \"59119035-ddbf-48f4-bb92-e08e747fbd7f\" (UID: \"59119035-ddbf-48f4-bb92-e08e747fbd7f\") " Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.991968 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-c4crw\" (UniqueName: \"kubernetes.io/projected/59119035-ddbf-48f4-bb92-e08e747fbd7f-kube-api-access-c4crw\") pod \"59119035-ddbf-48f4-bb92-e08e747fbd7f\" (UID: \"59119035-ddbf-48f4-bb92-e08e747fbd7f\") " Jan 28 17:44:36 crc kubenswrapper[5001]: I0128 17:44:36.996656 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59119035-ddbf-48f4-bb92-e08e747fbd7f-kube-api-access-c4crw" (OuterVolumeSpecName: "kube-api-access-c4crw") pod "59119035-ddbf-48f4-bb92-e08e747fbd7f" (UID: "59119035-ddbf-48f4-bb92-e08e747fbd7f"). InnerVolumeSpecName "kube-api-access-c4crw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:37 crc kubenswrapper[5001]: I0128 17:44:37.011820 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59119035-ddbf-48f4-bb92-e08e747fbd7f-config-data" (OuterVolumeSpecName: "config-data") pod "59119035-ddbf-48f4-bb92-e08e747fbd7f" (UID: "59119035-ddbf-48f4-bb92-e08e747fbd7f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:44:37 crc kubenswrapper[5001]: I0128 17:44:37.093998 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59119035-ddbf-48f4-bb92-e08e747fbd7f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:37 crc kubenswrapper[5001]: I0128 17:44:37.094035 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4crw\" (UniqueName: \"kubernetes.io/projected/59119035-ddbf-48f4-bb92-e08e747fbd7f-kube-api-access-c4crw\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:37 crc kubenswrapper[5001]: I0128 17:44:37.228968 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 28 17:44:37 crc kubenswrapper[5001]: I0128 17:44:37.235579 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 28 17:44:38 crc kubenswrapper[5001]: I0128 17:44:38.609196 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59119035-ddbf-48f4-bb92-e08e747fbd7f" path="/var/lib/kubelet/pods/59119035-ddbf-48f4-bb92-e08e747fbd7f/volumes" Jan 28 17:44:38 crc kubenswrapper[5001]: I0128 17:44:38.609996 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84acbe82-6092-44aa-83f5-19ef333f5733" path="/var/lib/kubelet/pods/84acbe82-6092-44aa-83f5-19ef333f5733/volumes" Jan 28 17:44:41 crc kubenswrapper[5001]: I0128 17:44:41.038655 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-rwnnn"] Jan 28 17:44:41 crc kubenswrapper[5001]: I0128 17:44:41.044371 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-rwnnn"] Jan 28 17:44:42 crc kubenswrapper[5001]: I0128 17:44:42.604059 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0548880a-31f0-4d7d-9eb8-5b402a8cc67a" path="/var/lib/kubelet/pods/0548880a-31f0-4d7d-9eb8-5b402a8cc67a/volumes" Jan 28 17:44:44 crc kubenswrapper[5001]: I0128 17:44:44.598618 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:44:44 crc kubenswrapper[5001]: E0128 17:44:44.599117 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:44:44 crc kubenswrapper[5001]: I0128 17:44:44.794216 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:44:44 crc kubenswrapper[5001]: I0128 17:44:44.794523 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="63b0d119-bac1-436b-8cf2-bfa3a9a4bf40" containerName="nova-kuttl-api-log" containerID="cri-o://31d028ba7a0960fc77f304e653e5ae4dec27fc01e383ddda6f2b7b5c77b9cb6b" gracePeriod=30 Jan 28 17:44:44 crc kubenswrapper[5001]: I0128 17:44:44.794640 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="63b0d119-bac1-436b-8cf2-bfa3a9a4bf40" containerName="nova-kuttl-api-api" containerID="cri-o://76a13fc09d096f38f3fc0a417def22f792beda4cb97d780bdb7c1e58ba2e74a2" gracePeriod=30 Jan 28 17:44:44 crc kubenswrapper[5001]: I0128 17:44:44.977493 5001 generic.go:334] "Generic (PLEG): container finished" podID="63b0d119-bac1-436b-8cf2-bfa3a9a4bf40" containerID="31d028ba7a0960fc77f304e653e5ae4dec27fc01e383ddda6f2b7b5c77b9cb6b" exitCode=143 Jan 28 17:44:44 crc kubenswrapper[5001]: I0128 17:44:44.977585 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40","Type":"ContainerDied","Data":"31d028ba7a0960fc77f304e653e5ae4dec27fc01e383ddda6f2b7b5c77b9cb6b"} Jan 28 17:44:45 crc kubenswrapper[5001]: I0128 17:44:45.147969 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:44:45 crc kubenswrapper[5001]: I0128 17:44:45.148188 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="b231e9f8-fe36-43e3-978a-bf5d8059f9b6" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://ffc227f3f6bd49fe3ac26f67027840cb1e71231938aa17b87dae9a60082f6299" gracePeriod=30 Jan 28 17:44:46 crc kubenswrapper[5001]: I0128 17:44:46.997284 5001 generic.go:334] "Generic (PLEG): container finished" podID="b231e9f8-fe36-43e3-978a-bf5d8059f9b6" containerID="ffc227f3f6bd49fe3ac26f67027840cb1e71231938aa17b87dae9a60082f6299" exitCode=0 Jan 28 17:44:46 crc kubenswrapper[5001]: I0128 17:44:46.997683 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"b231e9f8-fe36-43e3-978a-bf5d8059f9b6","Type":"ContainerDied","Data":"ffc227f3f6bd49fe3ac26f67027840cb1e71231938aa17b87dae9a60082f6299"} Jan 28 17:44:47 crc kubenswrapper[5001]: I0128 17:44:47.120249 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:44:47 crc kubenswrapper[5001]: I0128 17:44:47.253634 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5crtg\" (UniqueName: \"kubernetes.io/projected/b231e9f8-fe36-43e3-978a-bf5d8059f9b6-kube-api-access-5crtg\") pod \"b231e9f8-fe36-43e3-978a-bf5d8059f9b6\" (UID: \"b231e9f8-fe36-43e3-978a-bf5d8059f9b6\") " Jan 28 17:44:47 crc kubenswrapper[5001]: I0128 17:44:47.253717 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b231e9f8-fe36-43e3-978a-bf5d8059f9b6-config-data\") pod \"b231e9f8-fe36-43e3-978a-bf5d8059f9b6\" (UID: \"b231e9f8-fe36-43e3-978a-bf5d8059f9b6\") " Jan 28 17:44:47 crc kubenswrapper[5001]: I0128 17:44:47.258609 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b231e9f8-fe36-43e3-978a-bf5d8059f9b6-kube-api-access-5crtg" (OuterVolumeSpecName: "kube-api-access-5crtg") pod "b231e9f8-fe36-43e3-978a-bf5d8059f9b6" (UID: "b231e9f8-fe36-43e3-978a-bf5d8059f9b6"). InnerVolumeSpecName "kube-api-access-5crtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:47 crc kubenswrapper[5001]: I0128 17:44:47.285869 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b231e9f8-fe36-43e3-978a-bf5d8059f9b6-config-data" (OuterVolumeSpecName: "config-data") pod "b231e9f8-fe36-43e3-978a-bf5d8059f9b6" (UID: "b231e9f8-fe36-43e3-978a-bf5d8059f9b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:44:47 crc kubenswrapper[5001]: I0128 17:44:47.355665 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5crtg\" (UniqueName: \"kubernetes.io/projected/b231e9f8-fe36-43e3-978a-bf5d8059f9b6-kube-api-access-5crtg\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:47 crc kubenswrapper[5001]: I0128 17:44:47.355707 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b231e9f8-fe36-43e3-978a-bf5d8059f9b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.009719 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"b231e9f8-fe36-43e3-978a-bf5d8059f9b6","Type":"ContainerDied","Data":"1801023c937f1689623c3612675eec87b1fc8570454273bc4073ffb9a117b08a"} Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.010098 5001 scope.go:117] "RemoveContainer" containerID="ffc227f3f6bd49fe3ac26f67027840cb1e71231938aa17b87dae9a60082f6299" Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.009820 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.080847 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.089620 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.108042 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.108238 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://0021995f81adc902e6f4ba9fbceb53d6941ec352410d2406a29af3672d88db14" gracePeriod=30 Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.147497 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.147715 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4b238e8f-2244-480f-86f8-a5262f531e04" containerName="nova-kuttl-metadata-log" containerID="cri-o://fcdf213028b075a040197616c4fd2ac1fdbd46b77879177e0fadcc8401a0b963" gracePeriod=30 Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.149431 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4b238e8f-2244-480f-86f8-a5262f531e04" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://95d19e47bbb890f6e7896469d047302751df30722ff1fbed6fba0ed8087b284f" gracePeriod=30 Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.339563 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.471186 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63b0d119-bac1-436b-8cf2-bfa3a9a4bf40-logs\") pod \"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40\" (UID: \"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40\") " Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.471324 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63b0d119-bac1-436b-8cf2-bfa3a9a4bf40-config-data\") pod \"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40\" (UID: \"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40\") " Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.471505 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbm4c\" (UniqueName: \"kubernetes.io/projected/63b0d119-bac1-436b-8cf2-bfa3a9a4bf40-kube-api-access-rbm4c\") pod \"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40\" (UID: \"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40\") " Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.471837 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63b0d119-bac1-436b-8cf2-bfa3a9a4bf40-logs" (OuterVolumeSpecName: "logs") pod "63b0d119-bac1-436b-8cf2-bfa3a9a4bf40" (UID: "63b0d119-bac1-436b-8cf2-bfa3a9a4bf40"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.474970 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63b0d119-bac1-436b-8cf2-bfa3a9a4bf40-kube-api-access-rbm4c" (OuterVolumeSpecName: "kube-api-access-rbm4c") pod "63b0d119-bac1-436b-8cf2-bfa3a9a4bf40" (UID: "63b0d119-bac1-436b-8cf2-bfa3a9a4bf40"). InnerVolumeSpecName "kube-api-access-rbm4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.491606 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63b0d119-bac1-436b-8cf2-bfa3a9a4bf40-config-data" (OuterVolumeSpecName: "config-data") pod "63b0d119-bac1-436b-8cf2-bfa3a9a4bf40" (UID: "63b0d119-bac1-436b-8cf2-bfa3a9a4bf40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.573801 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbm4c\" (UniqueName: \"kubernetes.io/projected/63b0d119-bac1-436b-8cf2-bfa3a9a4bf40-kube-api-access-rbm4c\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.573835 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63b0d119-bac1-436b-8cf2-bfa3a9a4bf40-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.573847 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63b0d119-bac1-436b-8cf2-bfa3a9a4bf40-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:48 crc kubenswrapper[5001]: I0128 17:44:48.602436 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b231e9f8-fe36-43e3-978a-bf5d8059f9b6" path="/var/lib/kubelet/pods/b231e9f8-fe36-43e3-978a-bf5d8059f9b6/volumes" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.019596 5001 generic.go:334] "Generic (PLEG): container finished" podID="4b238e8f-2244-480f-86f8-a5262f531e04" containerID="fcdf213028b075a040197616c4fd2ac1fdbd46b77879177e0fadcc8401a0b963" exitCode=143 Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.019673 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4b238e8f-2244-480f-86f8-a5262f531e04","Type":"ContainerDied","Data":"fcdf213028b075a040197616c4fd2ac1fdbd46b77879177e0fadcc8401a0b963"} Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.022638 5001 generic.go:334] "Generic (PLEG): container finished" podID="63b0d119-bac1-436b-8cf2-bfa3a9a4bf40" containerID="76a13fc09d096f38f3fc0a417def22f792beda4cb97d780bdb7c1e58ba2e74a2" exitCode=0 Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.022679 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.022692 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40","Type":"ContainerDied","Data":"76a13fc09d096f38f3fc0a417def22f792beda4cb97d780bdb7c1e58ba2e74a2"} Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.022725 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"63b0d119-bac1-436b-8cf2-bfa3a9a4bf40","Type":"ContainerDied","Data":"a37e465841ea78abc331b3ce031ecd0b2b4a9e9d384c952318f97308d136e4ad"} Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.022746 5001 scope.go:117] "RemoveContainer" containerID="76a13fc09d096f38f3fc0a417def22f792beda4cb97d780bdb7c1e58ba2e74a2" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.045380 5001 scope.go:117] "RemoveContainer" containerID="31d028ba7a0960fc77f304e653e5ae4dec27fc01e383ddda6f2b7b5c77b9cb6b" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.046911 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.059293 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.067565 5001 scope.go:117] "RemoveContainer" containerID="76a13fc09d096f38f3fc0a417def22f792beda4cb97d780bdb7c1e58ba2e74a2" Jan 28 17:44:49 crc kubenswrapper[5001]: E0128 17:44:49.068104 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76a13fc09d096f38f3fc0a417def22f792beda4cb97d780bdb7c1e58ba2e74a2\": container with ID starting with 76a13fc09d096f38f3fc0a417def22f792beda4cb97d780bdb7c1e58ba2e74a2 not found: ID does not exist" containerID="76a13fc09d096f38f3fc0a417def22f792beda4cb97d780bdb7c1e58ba2e74a2" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.068163 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76a13fc09d096f38f3fc0a417def22f792beda4cb97d780bdb7c1e58ba2e74a2"} err="failed to get container status \"76a13fc09d096f38f3fc0a417def22f792beda4cb97d780bdb7c1e58ba2e74a2\": rpc error: code = NotFound desc = could not find container \"76a13fc09d096f38f3fc0a417def22f792beda4cb97d780bdb7c1e58ba2e74a2\": container with ID starting with 76a13fc09d096f38f3fc0a417def22f792beda4cb97d780bdb7c1e58ba2e74a2 not found: ID does not exist" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.068197 5001 scope.go:117] "RemoveContainer" containerID="31d028ba7a0960fc77f304e653e5ae4dec27fc01e383ddda6f2b7b5c77b9cb6b" Jan 28 17:44:49 crc kubenswrapper[5001]: E0128 17:44:49.068546 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31d028ba7a0960fc77f304e653e5ae4dec27fc01e383ddda6f2b7b5c77b9cb6b\": container with ID starting with 31d028ba7a0960fc77f304e653e5ae4dec27fc01e383ddda6f2b7b5c77b9cb6b not found: ID does not exist" containerID="31d028ba7a0960fc77f304e653e5ae4dec27fc01e383ddda6f2b7b5c77b9cb6b" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.068587 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31d028ba7a0960fc77f304e653e5ae4dec27fc01e383ddda6f2b7b5c77b9cb6b"} err="failed to get container status 
\"31d028ba7a0960fc77f304e653e5ae4dec27fc01e383ddda6f2b7b5c77b9cb6b\": rpc error: code = NotFound desc = could not find container \"31d028ba7a0960fc77f304e653e5ae4dec27fc01e383ddda6f2b7b5c77b9cb6b\": container with ID starting with 31d028ba7a0960fc77f304e653e5ae4dec27fc01e383ddda6f2b7b5c77b9cb6b not found: ID does not exist" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.431483 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.431695 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="e2987317-e3b0-4dce-89ca-cab188e4098e" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://be5c882ff449b6ace9f727bacab6436af42218249118a7cd2f38355f1d49ebc3" gracePeriod=30 Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.784207 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw"] Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.790951 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-827fw"] Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.803216 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd"] Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.813247 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-mphcd"] Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.919730 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell18e0b-account-delete-bjtg5"] Jan 28 17:44:49 crc kubenswrapper[5001]: E0128 17:44:49.920155 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59119035-ddbf-48f4-bb92-e08e747fbd7f" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920175 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="59119035-ddbf-48f4-bb92-e08e747fbd7f" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:44:49 crc kubenswrapper[5001]: E0128 17:44:49.920188 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84acbe82-6092-44aa-83f5-19ef333f5733" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920194 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="84acbe82-6092-44aa-83f5-19ef333f5733" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:44:49 crc kubenswrapper[5001]: E0128 17:44:49.920207 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63b0d119-bac1-436b-8cf2-bfa3a9a4bf40" containerName="nova-kuttl-api-log" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920213 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="63b0d119-bac1-436b-8cf2-bfa3a9a4bf40" containerName="nova-kuttl-api-log" Jan 28 17:44:49 crc kubenswrapper[5001]: E0128 17:44:49.920224 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63b0d119-bac1-436b-8cf2-bfa3a9a4bf40" containerName="nova-kuttl-api-api" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920230 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="63b0d119-bac1-436b-8cf2-bfa3a9a4bf40" containerName="nova-kuttl-api-api" Jan 28 17:44:49 crc kubenswrapper[5001]: E0128 17:44:49.920241 5001 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="af18b22f-3443-4200-a5f2-84a5c3426623" containerName="nova-kuttl-metadata-metadata" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920247 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="af18b22f-3443-4200-a5f2-84a5c3426623" containerName="nova-kuttl-metadata-metadata" Jan 28 17:44:49 crc kubenswrapper[5001]: E0128 17:44:49.920258 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb8aec7-32c4-45d0-9337-95843a72a04b" containerName="nova-kuttl-api-api" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920264 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb8aec7-32c4-45d0-9337-95843a72a04b" containerName="nova-kuttl-api-api" Jan 28 17:44:49 crc kubenswrapper[5001]: E0128 17:44:49.920273 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9aba33-5169-4acf-8b87-43d5053c97bd" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920280 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9aba33-5169-4acf-8b87-43d5053c97bd" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:44:49 crc kubenswrapper[5001]: E0128 17:44:49.920290 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633c525a-2821-41ad-9b18-96997a7f5a85" containerName="nova-kuttl-api-api" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920300 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="633c525a-2821-41ad-9b18-96997a7f5a85" containerName="nova-kuttl-api-api" Jan 28 17:44:49 crc kubenswrapper[5001]: E0128 17:44:49.920309 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b231e9f8-fe36-43e3-978a-bf5d8059f9b6" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920317 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="b231e9f8-fe36-43e3-978a-bf5d8059f9b6" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:44:49 crc kubenswrapper[5001]: E0128 17:44:49.920326 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bfe2e1d-424b-43d6-a27a-f24d77d25797" containerName="nova-kuttl-metadata-log" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920331 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bfe2e1d-424b-43d6-a27a-f24d77d25797" containerName="nova-kuttl-metadata-log" Jan 28 17:44:49 crc kubenswrapper[5001]: E0128 17:44:49.920343 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633c525a-2821-41ad-9b18-96997a7f5a85" containerName="nova-kuttl-api-log" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920353 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="633c525a-2821-41ad-9b18-96997a7f5a85" containerName="nova-kuttl-api-log" Jan 28 17:44:49 crc kubenswrapper[5001]: E0128 17:44:49.920368 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb8aec7-32c4-45d0-9337-95843a72a04b" containerName="nova-kuttl-api-log" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920376 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb8aec7-32c4-45d0-9337-95843a72a04b" containerName="nova-kuttl-api-log" Jan 28 17:44:49 crc kubenswrapper[5001]: E0128 17:44:49.920400 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af2dc302-b448-4a32-84de-319d261be0ee" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920426 5001 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="af2dc302-b448-4a32-84de-319d261be0ee" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:44:49 crc kubenswrapper[5001]: E0128 17:44:49.920436 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af18b22f-3443-4200-a5f2-84a5c3426623" containerName="nova-kuttl-metadata-log" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920443 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="af18b22f-3443-4200-a5f2-84a5c3426623" containerName="nova-kuttl-metadata-log" Jan 28 17:44:49 crc kubenswrapper[5001]: E0128 17:44:49.920459 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32687b75-0c4b-45d8-b1ef-5927de16f581" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920466 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="32687b75-0c4b-45d8-b1ef-5927de16f581" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:44:49 crc kubenswrapper[5001]: E0128 17:44:49.920481 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98e586c1-822a-4de4-9c80-3022342e8215" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920489 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="98e586c1-822a-4de4-9c80-3022342e8215" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:44:49 crc kubenswrapper[5001]: E0128 17:44:49.920500 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bfe2e1d-424b-43d6-a27a-f24d77d25797" containerName="nova-kuttl-metadata-metadata" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920507 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bfe2e1d-424b-43d6-a27a-f24d77d25797" containerName="nova-kuttl-metadata-metadata" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920704 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="63b0d119-bac1-436b-8cf2-bfa3a9a4bf40" containerName="nova-kuttl-api-log" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920717 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b9aba33-5169-4acf-8b87-43d5053c97bd" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920724 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb8aec7-32c4-45d0-9337-95843a72a04b" containerName="nova-kuttl-api-api" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920734 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="98e586c1-822a-4de4-9c80-3022342e8215" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920747 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="af2dc302-b448-4a32-84de-319d261be0ee" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920755 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb8aec7-32c4-45d0-9337-95843a72a04b" containerName="nova-kuttl-api-log" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920763 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="633c525a-2821-41ad-9b18-96997a7f5a85" containerName="nova-kuttl-api-api" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920774 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bfe2e1d-424b-43d6-a27a-f24d77d25797" containerName="nova-kuttl-metadata-metadata" Jan 28 17:44:49 
crc kubenswrapper[5001]: I0128 17:44:49.920783 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bfe2e1d-424b-43d6-a27a-f24d77d25797" containerName="nova-kuttl-metadata-log" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920794 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="63b0d119-bac1-436b-8cf2-bfa3a9a4bf40" containerName="nova-kuttl-api-api" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920805 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="84acbe82-6092-44aa-83f5-19ef333f5733" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920816 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="59119035-ddbf-48f4-bb92-e08e747fbd7f" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920824 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="633c525a-2821-41ad-9b18-96997a7f5a85" containerName="nova-kuttl-api-log" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920834 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="af18b22f-3443-4200-a5f2-84a5c3426623" containerName="nova-kuttl-metadata-metadata" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920845 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="af18b22f-3443-4200-a5f2-84a5c3426623" containerName="nova-kuttl-metadata-log" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920855 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="b231e9f8-fe36-43e3-978a-bf5d8059f9b6" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.920867 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="32687b75-0c4b-45d8-b1ef-5927de16f581" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.921570 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell18e0b-account-delete-bjtg5" Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.937998 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell18e0b-account-delete-bjtg5"] Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.986058 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novaapi4401-account-delete-tvcgk"] Jan 28 17:44:49 crc kubenswrapper[5001]: I0128 17:44:49.987353 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi4401-account-delete-tvcgk" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.027507 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapi4401-account-delete-tvcgk"] Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.059261 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell05d7a-account-delete-ldg9g"] Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.060369 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell05d7a-account-delete-ldg9g" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.078537 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell05d7a-account-delete-ldg9g"] Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.090954 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.091269 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="0354446b-b372-4934-bf4e-43ecd798ca5c" containerName="nova-kuttl-cell1-novncproxy-novncproxy" containerID="cri-o://589f8bf64649dfe7db837167d6f02399ce0f161dac43ba8d3b347d19c07d781f" gracePeriod=30 Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.097152 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbjst\" (UniqueName: \"kubernetes.io/projected/58b7f48b-43e2-432a-89a5-734b082b1b84-kube-api-access-rbjst\") pod \"novaapi4401-account-delete-tvcgk\" (UID: \"58b7f48b-43e2-432a-89a5-734b082b1b84\") " pod="nova-kuttl-default/novaapi4401-account-delete-tvcgk" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.097209 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97dhl\" (UniqueName: \"kubernetes.io/projected/41838942-06bc-4442-b961-191b29830fa1-kube-api-access-97dhl\") pod \"novacell18e0b-account-delete-bjtg5\" (UID: \"41838942-06bc-4442-b961-191b29830fa1\") " pod="nova-kuttl-default/novacell18e0b-account-delete-bjtg5" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.097328 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58b7f48b-43e2-432a-89a5-734b082b1b84-operator-scripts\") pod \"novaapi4401-account-delete-tvcgk\" (UID: \"58b7f48b-43e2-432a-89a5-734b082b1b84\") " pod="nova-kuttl-default/novaapi4401-account-delete-tvcgk" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.097632 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41838942-06bc-4442-b961-191b29830fa1-operator-scripts\") pod \"novacell18e0b-account-delete-bjtg5\" (UID: \"41838942-06bc-4442-b961-191b29830fa1\") " pod="nova-kuttl-default/novacell18e0b-account-delete-bjtg5" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.199683 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vv2f\" (UniqueName: \"kubernetes.io/projected/5254d426-9927-4afe-aaf4-fc4dcb433a2e-kube-api-access-8vv2f\") pod \"novacell05d7a-account-delete-ldg9g\" (UID: \"5254d426-9927-4afe-aaf4-fc4dcb433a2e\") " pod="nova-kuttl-default/novacell05d7a-account-delete-ldg9g" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.199748 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbjst\" (UniqueName: \"kubernetes.io/projected/58b7f48b-43e2-432a-89a5-734b082b1b84-kube-api-access-rbjst\") pod \"novaapi4401-account-delete-tvcgk\" (UID: \"58b7f48b-43e2-432a-89a5-734b082b1b84\") " pod="nova-kuttl-default/novaapi4401-account-delete-tvcgk" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.199793 5001 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-97dhl\" (UniqueName: \"kubernetes.io/projected/41838942-06bc-4442-b961-191b29830fa1-kube-api-access-97dhl\") pod \"novacell18e0b-account-delete-bjtg5\" (UID: \"41838942-06bc-4442-b961-191b29830fa1\") " pod="nova-kuttl-default/novacell18e0b-account-delete-bjtg5" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.199875 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58b7f48b-43e2-432a-89a5-734b082b1b84-operator-scripts\") pod \"novaapi4401-account-delete-tvcgk\" (UID: \"58b7f48b-43e2-432a-89a5-734b082b1b84\") " pod="nova-kuttl-default/novaapi4401-account-delete-tvcgk" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.199911 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5254d426-9927-4afe-aaf4-fc4dcb433a2e-operator-scripts\") pod \"novacell05d7a-account-delete-ldg9g\" (UID: \"5254d426-9927-4afe-aaf4-fc4dcb433a2e\") " pod="nova-kuttl-default/novacell05d7a-account-delete-ldg9g" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.199962 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41838942-06bc-4442-b961-191b29830fa1-operator-scripts\") pod \"novacell18e0b-account-delete-bjtg5\" (UID: \"41838942-06bc-4442-b961-191b29830fa1\") " pod="nova-kuttl-default/novacell18e0b-account-delete-bjtg5" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.201101 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58b7f48b-43e2-432a-89a5-734b082b1b84-operator-scripts\") pod \"novaapi4401-account-delete-tvcgk\" (UID: \"58b7f48b-43e2-432a-89a5-734b082b1b84\") " pod="nova-kuttl-default/novaapi4401-account-delete-tvcgk" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.201254 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41838942-06bc-4442-b961-191b29830fa1-operator-scripts\") pod \"novacell18e0b-account-delete-bjtg5\" (UID: \"41838942-06bc-4442-b961-191b29830fa1\") " pod="nova-kuttl-default/novacell18e0b-account-delete-bjtg5" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.233879 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbjst\" (UniqueName: \"kubernetes.io/projected/58b7f48b-43e2-432a-89a5-734b082b1b84-kube-api-access-rbjst\") pod \"novaapi4401-account-delete-tvcgk\" (UID: \"58b7f48b-43e2-432a-89a5-734b082b1b84\") " pod="nova-kuttl-default/novaapi4401-account-delete-tvcgk" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.233924 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97dhl\" (UniqueName: \"kubernetes.io/projected/41838942-06bc-4442-b961-191b29830fa1-kube-api-access-97dhl\") pod \"novacell18e0b-account-delete-bjtg5\" (UID: \"41838942-06bc-4442-b961-191b29830fa1\") " pod="nova-kuttl-default/novacell18e0b-account-delete-bjtg5" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.258393 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell18e0b-account-delete-bjtg5" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.301346 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vv2f\" (UniqueName: \"kubernetes.io/projected/5254d426-9927-4afe-aaf4-fc4dcb433a2e-kube-api-access-8vv2f\") pod \"novacell05d7a-account-delete-ldg9g\" (UID: \"5254d426-9927-4afe-aaf4-fc4dcb433a2e\") " pod="nova-kuttl-default/novacell05d7a-account-delete-ldg9g" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.301470 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5254d426-9927-4afe-aaf4-fc4dcb433a2e-operator-scripts\") pod \"novacell05d7a-account-delete-ldg9g\" (UID: \"5254d426-9927-4afe-aaf4-fc4dcb433a2e\") " pod="nova-kuttl-default/novacell05d7a-account-delete-ldg9g" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.302354 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5254d426-9927-4afe-aaf4-fc4dcb433a2e-operator-scripts\") pod \"novacell05d7a-account-delete-ldg9g\" (UID: \"5254d426-9927-4afe-aaf4-fc4dcb433a2e\") " pod="nova-kuttl-default/novacell05d7a-account-delete-ldg9g" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.323605 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vv2f\" (UniqueName: \"kubernetes.io/projected/5254d426-9927-4afe-aaf4-fc4dcb433a2e-kube-api-access-8vv2f\") pod \"novacell05d7a-account-delete-ldg9g\" (UID: \"5254d426-9927-4afe-aaf4-fc4dcb433a2e\") " pod="nova-kuttl-default/novacell05d7a-account-delete-ldg9g" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.333380 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi4401-account-delete-tvcgk" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.392646 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell05d7a-account-delete-ldg9g" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.606779 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d31e229-52c5-4f77-a446-4d54bc3a75af" path="/var/lib/kubelet/pods/1d31e229-52c5-4f77-a446-4d54bc3a75af/volumes" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.608326 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366" path="/var/lib/kubelet/pods/23f7fb2d-75a0-4fd2-bf89-fcfa45bd0366/volumes" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.609161 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63b0d119-bac1-436b-8cf2-bfa3a9a4bf40" path="/var/lib/kubelet/pods/63b0d119-bac1-436b-8cf2-bfa3a9a4bf40/volumes" Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.761870 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell18e0b-account-delete-bjtg5"] Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.865410 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapi4401-account-delete-tvcgk"] Jan 28 17:44:50 crc kubenswrapper[5001]: W0128 17:44:50.870915 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58b7f48b_43e2_432a_89a5_734b082b1b84.slice/crio-3f1e7f9c986bd1aa78457f388e2050aa7e8c4c0c0e6f02c358bb4bd079e8e9ee WatchSource:0}: Error finding container 3f1e7f9c986bd1aa78457f388e2050aa7e8c4c0c0e6f02c358bb4bd079e8e9ee: Status 404 returned error can't find the container with id 3f1e7f9c986bd1aa78457f388e2050aa7e8c4c0c0e6f02c358bb4bd079e8e9ee Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.922873 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell05d7a-account-delete-ldg9g"] Jan 28 17:44:50 crc kubenswrapper[5001]: I0128 17:44:50.939408 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.094894 5001 generic.go:334] "Generic (PLEG): container finished" podID="0354446b-b372-4934-bf4e-43ecd798ca5c" containerID="589f8bf64649dfe7db837167d6f02399ce0f161dac43ba8d3b347d19c07d781f" exitCode=0 Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.095073 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"0354446b-b372-4934-bf4e-43ecd798ca5c","Type":"ContainerDied","Data":"589f8bf64649dfe7db837167d6f02399ce0f161dac43ba8d3b347d19c07d781f"} Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.095119 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"0354446b-b372-4934-bf4e-43ecd798ca5c","Type":"ContainerDied","Data":"f35c3cf58878e86cda9cbeb754ea5c4d48161e21c8d702a7091fa59c47c80cdd"} Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.095093 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.095144 5001 scope.go:117] "RemoveContainer" containerID="589f8bf64649dfe7db837167d6f02399ce0f161dac43ba8d3b347d19c07d781f" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.106731 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell05d7a-account-delete-ldg9g" event={"ID":"5254d426-9927-4afe-aaf4-fc4dcb433a2e","Type":"ContainerStarted","Data":"c6abaaffd4b1a9b2261026291140d182d285151ef7a1a17f196cd8438e9013da"} Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.106785 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell05d7a-account-delete-ldg9g" event={"ID":"5254d426-9927-4afe-aaf4-fc4dcb433a2e","Type":"ContainerStarted","Data":"9b8699a7333575a04882737ec1a47dd7aab2b87e509e489552447b25da29a3c6"} Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.111516 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi4401-account-delete-tvcgk" event={"ID":"58b7f48b-43e2-432a-89a5-734b082b1b84","Type":"ContainerStarted","Data":"a7dbc54393a470edf356a8f42c5679939bff8a013029e11c49a7d179d30c8ae9"} Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.111553 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi4401-account-delete-tvcgk" event={"ID":"58b7f48b-43e2-432a-89a5-734b082b1b84","Type":"ContainerStarted","Data":"3f1e7f9c986bd1aa78457f388e2050aa7e8c4c0c0e6f02c358bb4bd079e8e9ee"} Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.114151 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell18e0b-account-delete-bjtg5" event={"ID":"41838942-06bc-4442-b961-191b29830fa1","Type":"ContainerStarted","Data":"45fe20b17251dd7a5a54d6e5900d4f03a3b73cf76ff3e67ee16d0ba45d046d1c"} Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.114178 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell18e0b-account-delete-bjtg5" event={"ID":"41838942-06bc-4442-b961-191b29830fa1","Type":"ContainerStarted","Data":"b1d49b2b402ce5ab2e025b2942d806df045639564b2b9ab56fcb0734360cc46d"} Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.117554 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0354446b-b372-4934-bf4e-43ecd798ca5c-config-data\") pod \"0354446b-b372-4934-bf4e-43ecd798ca5c\" (UID: \"0354446b-b372-4934-bf4e-43ecd798ca5c\") " Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.117635 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8jrt\" (UniqueName: \"kubernetes.io/projected/0354446b-b372-4934-bf4e-43ecd798ca5c-kube-api-access-t8jrt\") pod \"0354446b-b372-4934-bf4e-43ecd798ca5c\" (UID: \"0354446b-b372-4934-bf4e-43ecd798ca5c\") " Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.130166 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0354446b-b372-4934-bf4e-43ecd798ca5c-kube-api-access-t8jrt" (OuterVolumeSpecName: "kube-api-access-t8jrt") pod "0354446b-b372-4934-bf4e-43ecd798ca5c" (UID: "0354446b-b372-4934-bf4e-43ecd798ca5c"). InnerVolumeSpecName "kube-api-access-t8jrt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.132853 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/novacell05d7a-account-delete-ldg9g" podStartSLOduration=1.132836651 podStartE2EDuration="1.132836651s" podCreationTimestamp="2026-01-28 17:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:44:51.125162209 +0000 UTC m=+1737.292950439" watchObservedRunningTime="2026-01-28 17:44:51.132836651 +0000 UTC m=+1737.300624881" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.147880 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0354446b-b372-4934-bf4e-43ecd798ca5c-config-data" (OuterVolumeSpecName: "config-data") pod "0354446b-b372-4934-bf4e-43ecd798ca5c" (UID: "0354446b-b372-4934-bf4e-43ecd798ca5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.203412 5001 scope.go:117] "RemoveContainer" containerID="589f8bf64649dfe7db837167d6f02399ce0f161dac43ba8d3b347d19c07d781f" Jan 28 17:44:51 crc kubenswrapper[5001]: E0128 17:44:51.204406 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"589f8bf64649dfe7db837167d6f02399ce0f161dac43ba8d3b347d19c07d781f\": container with ID starting with 589f8bf64649dfe7db837167d6f02399ce0f161dac43ba8d3b347d19c07d781f not found: ID does not exist" containerID="589f8bf64649dfe7db837167d6f02399ce0f161dac43ba8d3b347d19c07d781f" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.204452 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"589f8bf64649dfe7db837167d6f02399ce0f161dac43ba8d3b347d19c07d781f"} err="failed to get container status \"589f8bf64649dfe7db837167d6f02399ce0f161dac43ba8d3b347d19c07d781f\": rpc error: code = NotFound desc = could not find container \"589f8bf64649dfe7db837167d6f02399ce0f161dac43ba8d3b347d19c07d781f\": container with ID starting with 589f8bf64649dfe7db837167d6f02399ce0f161dac43ba8d3b347d19c07d781f not found: ID does not exist" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.219601 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0354446b-b372-4934-bf4e-43ecd798ca5c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.219652 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8jrt\" (UniqueName: \"kubernetes.io/projected/0354446b-b372-4934-bf4e-43ecd798ca5c-kube-api-access-t8jrt\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:51 crc kubenswrapper[5001]: E0128 17:44:51.246066 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="be5c882ff449b6ace9f727bacab6436af42218249118a7cd2f38355f1d49ebc3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 17:44:51 crc kubenswrapper[5001]: E0128 17:44:51.247843 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="be5c882ff449b6ace9f727bacab6436af42218249118a7cd2f38355f1d49ebc3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 17:44:51 crc kubenswrapper[5001]: E0128 17:44:51.249568 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="be5c882ff449b6ace9f727bacab6436af42218249118a7cd2f38355f1d49ebc3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 17:44:51 crc kubenswrapper[5001]: E0128 17:44:51.249639 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="e2987317-e3b0-4dce-89ca-cab188e4098e" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.516023 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4b238e8f-2244-480f-86f8-a5262f531e04" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.165:8775/\": dial tcp 10.217.0.165:8775: connect: connection refused" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.516193 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4b238e8f-2244-480f-86f8-a5262f531e04" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.165:8775/\": dial tcp 10.217.0.165:8775: connect: connection refused" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.573323 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/novaapi4401-account-delete-tvcgk" podStartSLOduration=2.573303159 podStartE2EDuration="2.573303159s" podCreationTimestamp="2026-01-28 17:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:44:51.155201396 +0000 UTC m=+1737.322989626" watchObservedRunningTime="2026-01-28 17:44:51.573303159 +0000 UTC m=+1737.741091389" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.582119 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.585827 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.608428 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.731333 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d-config-data\") pod \"faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d\" (UID: \"faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d\") " Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.731439 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwpb6\" (UniqueName: \"kubernetes.io/projected/faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d-kube-api-access-qwpb6\") pod \"faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d\" (UID: \"faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d\") " Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.737026 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d-kube-api-access-qwpb6" (OuterVolumeSpecName: "kube-api-access-qwpb6") pod "faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d" (UID: "faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d"). InnerVolumeSpecName "kube-api-access-qwpb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.762088 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d-config-data" (OuterVolumeSpecName: "config-data") pod "faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d" (UID: "faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.806915 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.832869 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwpb6\" (UniqueName: \"kubernetes.io/projected/faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d-kube-api-access-qwpb6\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.832897 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.934431 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b238e8f-2244-480f-86f8-a5262f531e04-logs\") pod \"4b238e8f-2244-480f-86f8-a5262f531e04\" (UID: \"4b238e8f-2244-480f-86f8-a5262f531e04\") " Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.934509 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmdwq\" (UniqueName: \"kubernetes.io/projected/4b238e8f-2244-480f-86f8-a5262f531e04-kube-api-access-bmdwq\") pod \"4b238e8f-2244-480f-86f8-a5262f531e04\" (UID: \"4b238e8f-2244-480f-86f8-a5262f531e04\") " Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.934632 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b238e8f-2244-480f-86f8-a5262f531e04-config-data\") pod \"4b238e8f-2244-480f-86f8-a5262f531e04\" (UID: \"4b238e8f-2244-480f-86f8-a5262f531e04\") " Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.935996 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b238e8f-2244-480f-86f8-a5262f531e04-logs" (OuterVolumeSpecName: "logs") pod "4b238e8f-2244-480f-86f8-a5262f531e04" (UID: "4b238e8f-2244-480f-86f8-a5262f531e04"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.938725 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b238e8f-2244-480f-86f8-a5262f531e04-kube-api-access-bmdwq" (OuterVolumeSpecName: "kube-api-access-bmdwq") pod "4b238e8f-2244-480f-86f8-a5262f531e04" (UID: "4b238e8f-2244-480f-86f8-a5262f531e04"). InnerVolumeSpecName "kube-api-access-bmdwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:51 crc kubenswrapper[5001]: I0128 17:44:51.954176 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b238e8f-2244-480f-86f8-a5262f531e04-config-data" (OuterVolumeSpecName: "config-data") pod "4b238e8f-2244-480f-86f8-a5262f531e04" (UID: "4b238e8f-2244-480f-86f8-a5262f531e04"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.036276 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b238e8f-2244-480f-86f8-a5262f531e04-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.036312 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b238e8f-2244-480f-86f8-a5262f531e04-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.036325 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmdwq\" (UniqueName: \"kubernetes.io/projected/4b238e8f-2244-480f-86f8-a5262f531e04-kube-api-access-bmdwq\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.133320 5001 generic.go:334] "Generic (PLEG): container finished" podID="5254d426-9927-4afe-aaf4-fc4dcb433a2e" containerID="c6abaaffd4b1a9b2261026291140d182d285151ef7a1a17f196cd8438e9013da" exitCode=0 Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.133386 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell05d7a-account-delete-ldg9g" event={"ID":"5254d426-9927-4afe-aaf4-fc4dcb433a2e","Type":"ContainerDied","Data":"c6abaaffd4b1a9b2261026291140d182d285151ef7a1a17f196cd8438e9013da"} Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.136256 5001 generic.go:334] "Generic (PLEG): container finished" podID="faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d" containerID="0021995f81adc902e6f4ba9fbceb53d6941ec352410d2406a29af3672d88db14" exitCode=0 Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.136341 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d","Type":"ContainerDied","Data":"0021995f81adc902e6f4ba9fbceb53d6941ec352410d2406a29af3672d88db14"} Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.136368 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d","Type":"ContainerDied","Data":"7178cfb413c46dfabea2690958d37e1e8c3e4af7b88652b6849b986da321ffa2"} Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.136384 5001 scope.go:117] "RemoveContainer" containerID="0021995f81adc902e6f4ba9fbceb53d6941ec352410d2406a29af3672d88db14" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.136455 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.138427 5001 generic.go:334] "Generic (PLEG): container finished" podID="4b238e8f-2244-480f-86f8-a5262f531e04" containerID="95d19e47bbb890f6e7896469d047302751df30722ff1fbed6fba0ed8087b284f" exitCode=0 Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.138469 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.138516 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4b238e8f-2244-480f-86f8-a5262f531e04","Type":"ContainerDied","Data":"95d19e47bbb890f6e7896469d047302751df30722ff1fbed6fba0ed8087b284f"} Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.138548 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4b238e8f-2244-480f-86f8-a5262f531e04","Type":"ContainerDied","Data":"99ba66614dda6812cc447c1572f7cbbd62b6d0f4c67cfe01ff7994486e911be2"} Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.143111 5001 generic.go:334] "Generic (PLEG): container finished" podID="58b7f48b-43e2-432a-89a5-734b082b1b84" containerID="a7dbc54393a470edf356a8f42c5679939bff8a013029e11c49a7d179d30c8ae9" exitCode=0 Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.143181 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi4401-account-delete-tvcgk" event={"ID":"58b7f48b-43e2-432a-89a5-734b082b1b84","Type":"ContainerDied","Data":"a7dbc54393a470edf356a8f42c5679939bff8a013029e11c49a7d179d30c8ae9"} Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.147797 5001 generic.go:334] "Generic (PLEG): container finished" podID="41838942-06bc-4442-b961-191b29830fa1" containerID="45fe20b17251dd7a5a54d6e5900d4f03a3b73cf76ff3e67ee16d0ba45d046d1c" exitCode=0 Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.148027 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell18e0b-account-delete-bjtg5" event={"ID":"41838942-06bc-4442-b961-191b29830fa1","Type":"ContainerDied","Data":"45fe20b17251dd7a5a54d6e5900d4f03a3b73cf76ff3e67ee16d0ba45d046d1c"} Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.166579 5001 scope.go:117] "RemoveContainer" containerID="0021995f81adc902e6f4ba9fbceb53d6941ec352410d2406a29af3672d88db14" Jan 28 17:44:52 crc kubenswrapper[5001]: E0128 17:44:52.171085 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0021995f81adc902e6f4ba9fbceb53d6941ec352410d2406a29af3672d88db14\": container with ID starting with 0021995f81adc902e6f4ba9fbceb53d6941ec352410d2406a29af3672d88db14 not found: ID does not exist" containerID="0021995f81adc902e6f4ba9fbceb53d6941ec352410d2406a29af3672d88db14" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.171131 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0021995f81adc902e6f4ba9fbceb53d6941ec352410d2406a29af3672d88db14"} err="failed to get container status \"0021995f81adc902e6f4ba9fbceb53d6941ec352410d2406a29af3672d88db14\": rpc error: code = NotFound desc = could not find container \"0021995f81adc902e6f4ba9fbceb53d6941ec352410d2406a29af3672d88db14\": container with ID starting with 0021995f81adc902e6f4ba9fbceb53d6941ec352410d2406a29af3672d88db14 not found: ID does not exist" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.171156 5001 scope.go:117] "RemoveContainer" containerID="95d19e47bbb890f6e7896469d047302751df30722ff1fbed6fba0ed8087b284f" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.185680 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.197139 5001 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.198467 5001 scope.go:117] "RemoveContainer" containerID="fcdf213028b075a040197616c4fd2ac1fdbd46b77879177e0fadcc8401a0b963" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.214064 5001 scope.go:117] "RemoveContainer" containerID="95d19e47bbb890f6e7896469d047302751df30722ff1fbed6fba0ed8087b284f" Jan 28 17:44:52 crc kubenswrapper[5001]: E0128 17:44:52.214614 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95d19e47bbb890f6e7896469d047302751df30722ff1fbed6fba0ed8087b284f\": container with ID starting with 95d19e47bbb890f6e7896469d047302751df30722ff1fbed6fba0ed8087b284f not found: ID does not exist" containerID="95d19e47bbb890f6e7896469d047302751df30722ff1fbed6fba0ed8087b284f" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.214648 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d19e47bbb890f6e7896469d047302751df30722ff1fbed6fba0ed8087b284f"} err="failed to get container status \"95d19e47bbb890f6e7896469d047302751df30722ff1fbed6fba0ed8087b284f\": rpc error: code = NotFound desc = could not find container \"95d19e47bbb890f6e7896469d047302751df30722ff1fbed6fba0ed8087b284f\": container with ID starting with 95d19e47bbb890f6e7896469d047302751df30722ff1fbed6fba0ed8087b284f not found: ID does not exist" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.214676 5001 scope.go:117] "RemoveContainer" containerID="fcdf213028b075a040197616c4fd2ac1fdbd46b77879177e0fadcc8401a0b963" Jan 28 17:44:52 crc kubenswrapper[5001]: E0128 17:44:52.215747 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcdf213028b075a040197616c4fd2ac1fdbd46b77879177e0fadcc8401a0b963\": container with ID starting with fcdf213028b075a040197616c4fd2ac1fdbd46b77879177e0fadcc8401a0b963 not found: ID does not exist" containerID="fcdf213028b075a040197616c4fd2ac1fdbd46b77879177e0fadcc8401a0b963" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.215774 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcdf213028b075a040197616c4fd2ac1fdbd46b77879177e0fadcc8401a0b963"} err="failed to get container status \"fcdf213028b075a040197616c4fd2ac1fdbd46b77879177e0fadcc8401a0b963\": rpc error: code = NotFound desc = could not find container \"fcdf213028b075a040197616c4fd2ac1fdbd46b77879177e0fadcc8401a0b963\": container with ID starting with fcdf213028b075a040197616c4fd2ac1fdbd46b77879177e0fadcc8401a0b963 not found: ID does not exist" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.224987 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.233702 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.451760 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell18e0b-account-delete-bjtg5" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.544776 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97dhl\" (UniqueName: \"kubernetes.io/projected/41838942-06bc-4442-b961-191b29830fa1-kube-api-access-97dhl\") pod \"41838942-06bc-4442-b961-191b29830fa1\" (UID: \"41838942-06bc-4442-b961-191b29830fa1\") " Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.545343 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41838942-06bc-4442-b961-191b29830fa1-operator-scripts\") pod \"41838942-06bc-4442-b961-191b29830fa1\" (UID: \"41838942-06bc-4442-b961-191b29830fa1\") " Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.545847 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41838942-06bc-4442-b961-191b29830fa1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "41838942-06bc-4442-b961-191b29830fa1" (UID: "41838942-06bc-4442-b961-191b29830fa1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.546045 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41838942-06bc-4442-b961-191b29830fa1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.549332 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41838942-06bc-4442-b961-191b29830fa1-kube-api-access-97dhl" (OuterVolumeSpecName: "kube-api-access-97dhl") pod "41838942-06bc-4442-b961-191b29830fa1" (UID: "41838942-06bc-4442-b961-191b29830fa1"). InnerVolumeSpecName "kube-api-access-97dhl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.603655 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0354446b-b372-4934-bf4e-43ecd798ca5c" path="/var/lib/kubelet/pods/0354446b-b372-4934-bf4e-43ecd798ca5c/volumes" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.604576 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b238e8f-2244-480f-86f8-a5262f531e04" path="/var/lib/kubelet/pods/4b238e8f-2244-480f-86f8-a5262f531e04/volumes" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.605446 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d" path="/var/lib/kubelet/pods/faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d/volumes" Jan 28 17:44:52 crc kubenswrapper[5001]: I0128 17:44:52.647499 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97dhl\" (UniqueName: \"kubernetes.io/projected/41838942-06bc-4442-b961-191b29830fa1-kube-api-access-97dhl\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:53 crc kubenswrapper[5001]: I0128 17:44:53.159334 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell18e0b-account-delete-bjtg5" event={"ID":"41838942-06bc-4442-b961-191b29830fa1","Type":"ContainerDied","Data":"b1d49b2b402ce5ab2e025b2942d806df045639564b2b9ab56fcb0734360cc46d"} Jan 28 17:44:53 crc kubenswrapper[5001]: I0128 17:44:53.159389 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1d49b2b402ce5ab2e025b2942d806df045639564b2b9ab56fcb0734360cc46d" Jan 28 17:44:53 crc kubenswrapper[5001]: I0128 17:44:53.159394 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell18e0b-account-delete-bjtg5" Jan 28 17:44:53 crc kubenswrapper[5001]: I0128 17:44:53.477122 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell05d7a-account-delete-ldg9g" Jan 28 17:44:53 crc kubenswrapper[5001]: I0128 17:44:53.536469 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapi4401-account-delete-tvcgk" Jan 28 17:44:53 crc kubenswrapper[5001]: I0128 17:44:53.665886 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58b7f48b-43e2-432a-89a5-734b082b1b84-operator-scripts\") pod \"58b7f48b-43e2-432a-89a5-734b082b1b84\" (UID: \"58b7f48b-43e2-432a-89a5-734b082b1b84\") " Jan 28 17:44:53 crc kubenswrapper[5001]: I0128 17:44:53.666240 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbjst\" (UniqueName: \"kubernetes.io/projected/58b7f48b-43e2-432a-89a5-734b082b1b84-kube-api-access-rbjst\") pod \"58b7f48b-43e2-432a-89a5-734b082b1b84\" (UID: \"58b7f48b-43e2-432a-89a5-734b082b1b84\") " Jan 28 17:44:53 crc kubenswrapper[5001]: I0128 17:44:53.666380 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vv2f\" (UniqueName: \"kubernetes.io/projected/5254d426-9927-4afe-aaf4-fc4dcb433a2e-kube-api-access-8vv2f\") pod \"5254d426-9927-4afe-aaf4-fc4dcb433a2e\" (UID: \"5254d426-9927-4afe-aaf4-fc4dcb433a2e\") " Jan 28 17:44:53 crc kubenswrapper[5001]: I0128 17:44:53.666418 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58b7f48b-43e2-432a-89a5-734b082b1b84-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58b7f48b-43e2-432a-89a5-734b082b1b84" (UID: "58b7f48b-43e2-432a-89a5-734b082b1b84"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:44:53 crc kubenswrapper[5001]: I0128 17:44:53.666457 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5254d426-9927-4afe-aaf4-fc4dcb433a2e-operator-scripts\") pod \"5254d426-9927-4afe-aaf4-fc4dcb433a2e\" (UID: \"5254d426-9927-4afe-aaf4-fc4dcb433a2e\") " Jan 28 17:44:53 crc kubenswrapper[5001]: I0128 17:44:53.666769 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58b7f48b-43e2-432a-89a5-734b082b1b84-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:53 crc kubenswrapper[5001]: I0128 17:44:53.667177 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5254d426-9927-4afe-aaf4-fc4dcb433a2e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5254d426-9927-4afe-aaf4-fc4dcb433a2e" (UID: "5254d426-9927-4afe-aaf4-fc4dcb433a2e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:44:53 crc kubenswrapper[5001]: I0128 17:44:53.669649 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b7f48b-43e2-432a-89a5-734b082b1b84-kube-api-access-rbjst" (OuterVolumeSpecName: "kube-api-access-rbjst") pod "58b7f48b-43e2-432a-89a5-734b082b1b84" (UID: "58b7f48b-43e2-432a-89a5-734b082b1b84"). InnerVolumeSpecName "kube-api-access-rbjst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:53 crc kubenswrapper[5001]: I0128 17:44:53.675569 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5254d426-9927-4afe-aaf4-fc4dcb433a2e-kube-api-access-8vv2f" (OuterVolumeSpecName: "kube-api-access-8vv2f") pod "5254d426-9927-4afe-aaf4-fc4dcb433a2e" (UID: "5254d426-9927-4afe-aaf4-fc4dcb433a2e"). 
InnerVolumeSpecName "kube-api-access-8vv2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:53 crc kubenswrapper[5001]: I0128 17:44:53.768522 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vv2f\" (UniqueName: \"kubernetes.io/projected/5254d426-9927-4afe-aaf4-fc4dcb433a2e-kube-api-access-8vv2f\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:53 crc kubenswrapper[5001]: I0128 17:44:53.768552 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5254d426-9927-4afe-aaf4-fc4dcb433a2e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:53 crc kubenswrapper[5001]: I0128 17:44:53.768563 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbjst\" (UniqueName: \"kubernetes.io/projected/58b7f48b-43e2-432a-89a5-734b082b1b84-kube-api-access-rbjst\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:54 crc kubenswrapper[5001]: I0128 17:44:54.186485 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell05d7a-account-delete-ldg9g" Jan 28 17:44:54 crc kubenswrapper[5001]: I0128 17:44:54.186517 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell05d7a-account-delete-ldg9g" event={"ID":"5254d426-9927-4afe-aaf4-fc4dcb433a2e","Type":"ContainerDied","Data":"9b8699a7333575a04882737ec1a47dd7aab2b87e509e489552447b25da29a3c6"} Jan 28 17:44:54 crc kubenswrapper[5001]: I0128 17:44:54.186568 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b8699a7333575a04882737ec1a47dd7aab2b87e509e489552447b25da29a3c6" Jan 28 17:44:54 crc kubenswrapper[5001]: I0128 17:44:54.190535 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi4401-account-delete-tvcgk" event={"ID":"58b7f48b-43e2-432a-89a5-734b082b1b84","Type":"ContainerDied","Data":"3f1e7f9c986bd1aa78457f388e2050aa7e8c4c0c0e6f02c358bb4bd079e8e9ee"} Jan 28 17:44:54 crc kubenswrapper[5001]: I0128 17:44:54.190571 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f1e7f9c986bd1aa78457f388e2050aa7e8c4c0c0e6f02c358bb4bd079e8e9ee" Jan 28 17:44:54 crc kubenswrapper[5001]: I0128 17:44:54.190664 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapi4401-account-delete-tvcgk" Jan 28 17:44:54 crc kubenswrapper[5001]: I0128 17:44:54.929601 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-nvnhv"] Jan 28 17:44:54 crc kubenswrapper[5001]: I0128 17:44:54.949151 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-nvnhv"] Jan 28 17:44:54 crc kubenswrapper[5001]: I0128 17:44:54.959332 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-8e0b-account-create-update-rcnsv"] Jan 28 17:44:54 crc kubenswrapper[5001]: I0128 17:44:54.968360 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell18e0b-account-delete-bjtg5"] Jan 28 17:44:54 crc kubenswrapper[5001]: I0128 17:44:54.976411 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-8e0b-account-create-update-rcnsv"] Jan 28 17:44:54 crc kubenswrapper[5001]: I0128 17:44:54.984732 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell18e0b-account-delete-bjtg5"] Jan 28 17:44:55 crc kubenswrapper[5001]: I0128 17:44:55.033024 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-hnv54"] Jan 28 17:44:55 crc kubenswrapper[5001]: I0128 17:44:55.039290 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-hnv54"] Jan 28 17:44:55 crc kubenswrapper[5001]: I0128 17:44:55.064430 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-4401-account-create-update-8l8vr"] Jan 28 17:44:55 crc kubenswrapper[5001]: I0128 17:44:55.070335 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-4401-account-create-update-8l8vr"] Jan 28 17:44:55 crc kubenswrapper[5001]: I0128 17:44:55.072470 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novaapi4401-account-delete-tvcgk"] Jan 28 17:44:55 crc kubenswrapper[5001]: I0128 17:44:55.079748 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novaapi4401-account-delete-tvcgk"] Jan 28 17:44:55 crc kubenswrapper[5001]: I0128 17:44:55.133630 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-89s8q"] Jan 28 17:44:55 crc kubenswrapper[5001]: I0128 17:44:55.140847 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-89s8q"] Jan 28 17:44:55 crc kubenswrapper[5001]: I0128 17:44:55.147402 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell05d7a-account-delete-ldg9g"] Jan 28 17:44:55 crc kubenswrapper[5001]: I0128 17:44:55.152891 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-5d7a-account-create-update-9z49t"] Jan 28 17:44:55 crc kubenswrapper[5001]: I0128 17:44:55.158339 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-5d7a-account-create-update-9z49t"] Jan 28 17:44:55 crc kubenswrapper[5001]: I0128 17:44:55.163364 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell05d7a-account-delete-ldg9g"] Jan 28 17:44:55 crc kubenswrapper[5001]: I0128 17:44:55.199470 5001 generic.go:334] "Generic (PLEG): container finished" podID="e2987317-e3b0-4dce-89ca-cab188e4098e" containerID="be5c882ff449b6ace9f727bacab6436af42218249118a7cd2f38355f1d49ebc3" exitCode=0 
Jan 28 17:44:55 crc kubenswrapper[5001]: I0128 17:44:55.199508 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"e2987317-e3b0-4dce-89ca-cab188e4098e","Type":"ContainerDied","Data":"be5c882ff449b6ace9f727bacab6436af42218249118a7cd2f38355f1d49ebc3"} Jan 28 17:44:55 crc kubenswrapper[5001]: I0128 17:44:55.714506 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:44:55 crc kubenswrapper[5001]: I0128 17:44:55.903069 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2987317-e3b0-4dce-89ca-cab188e4098e-config-data\") pod \"e2987317-e3b0-4dce-89ca-cab188e4098e\" (UID: \"e2987317-e3b0-4dce-89ca-cab188e4098e\") " Jan 28 17:44:55 crc kubenswrapper[5001]: I0128 17:44:55.903113 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq9cp\" (UniqueName: \"kubernetes.io/projected/e2987317-e3b0-4dce-89ca-cab188e4098e-kube-api-access-gq9cp\") pod \"e2987317-e3b0-4dce-89ca-cab188e4098e\" (UID: \"e2987317-e3b0-4dce-89ca-cab188e4098e\") " Jan 28 17:44:55 crc kubenswrapper[5001]: I0128 17:44:55.914786 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2987317-e3b0-4dce-89ca-cab188e4098e-kube-api-access-gq9cp" (OuterVolumeSpecName: "kube-api-access-gq9cp") pod "e2987317-e3b0-4dce-89ca-cab188e4098e" (UID: "e2987317-e3b0-4dce-89ca-cab188e4098e"). InnerVolumeSpecName "kube-api-access-gq9cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:55 crc kubenswrapper[5001]: I0128 17:44:55.923433 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2987317-e3b0-4dce-89ca-cab188e4098e-config-data" (OuterVolumeSpecName: "config-data") pod "e2987317-e3b0-4dce-89ca-cab188e4098e" (UID: "e2987317-e3b0-4dce-89ca-cab188e4098e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:44:56 crc kubenswrapper[5001]: I0128 17:44:56.006187 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2987317-e3b0-4dce-89ca-cab188e4098e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:56 crc kubenswrapper[5001]: I0128 17:44:56.006439 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq9cp\" (UniqueName: \"kubernetes.io/projected/e2987317-e3b0-4dce-89ca-cab188e4098e-kube-api-access-gq9cp\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:56 crc kubenswrapper[5001]: I0128 17:44:56.213532 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"e2987317-e3b0-4dce-89ca-cab188e4098e","Type":"ContainerDied","Data":"09aa19b80e20e4aca0091a90db9c07946fd43cc07f26fdfab9a3b4aa87ca53ad"} Jan 28 17:44:56 crc kubenswrapper[5001]: I0128 17:44:56.213622 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:44:56 crc kubenswrapper[5001]: I0128 17:44:56.214635 5001 scope.go:117] "RemoveContainer" containerID="be5c882ff449b6ace9f727bacab6436af42218249118a7cd2f38355f1d49ebc3" Jan 28 17:44:56 crc kubenswrapper[5001]: I0128 17:44:56.252918 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:44:56 crc kubenswrapper[5001]: I0128 17:44:56.263429 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:44:56 crc kubenswrapper[5001]: I0128 17:44:56.594540 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:44:56 crc kubenswrapper[5001]: E0128 17:44:56.594783 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:44:56 crc kubenswrapper[5001]: I0128 17:44:56.602335 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ef94c46-4f20-4956-a75a-ae044c4c64a9" path="/var/lib/kubelet/pods/0ef94c46-4f20-4956-a75a-ae044c4c64a9/volumes" Jan 28 17:44:56 crc kubenswrapper[5001]: I0128 17:44:56.603036 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e4cae00-3e87-4f4b-9222-ef8633872283" path="/var/lib/kubelet/pods/3e4cae00-3e87-4f4b-9222-ef8633872283/volumes" Jan 28 17:44:56 crc kubenswrapper[5001]: I0128 17:44:56.603674 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41838942-06bc-4442-b961-191b29830fa1" path="/var/lib/kubelet/pods/41838942-06bc-4442-b961-191b29830fa1/volumes" Jan 28 17:44:56 crc kubenswrapper[5001]: I0128 17:44:56.604340 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5254d426-9927-4afe-aaf4-fc4dcb433a2e" path="/var/lib/kubelet/pods/5254d426-9927-4afe-aaf4-fc4dcb433a2e/volumes" Jan 28 17:44:56 crc kubenswrapper[5001]: I0128 17:44:56.605614 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58b7f48b-43e2-432a-89a5-734b082b1b84" path="/var/lib/kubelet/pods/58b7f48b-43e2-432a-89a5-734b082b1b84/volumes" Jan 28 17:44:56 crc kubenswrapper[5001]: I0128 17:44:56.606251 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3" path="/var/lib/kubelet/pods/7ab73ab0-06b7-4ac7-a0b3-4aec0e60bca3/volumes" Jan 28 17:44:56 crc kubenswrapper[5001]: I0128 17:44:56.606909 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e1187ee-bb5c-4107-87b8-eee20cc5ef51" path="/var/lib/kubelet/pods/8e1187ee-bb5c-4107-87b8-eee20cc5ef51/volumes" Jan 28 17:44:56 crc kubenswrapper[5001]: I0128 17:44:56.608177 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bec96a81-f6e1-4b8d-8955-cb7e63ae243d" path="/var/lib/kubelet/pods/bec96a81-f6e1-4b8d-8955-cb7e63ae243d/volumes" Jan 28 17:44:56 crc kubenswrapper[5001]: I0128 17:44:56.608834 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2987317-e3b0-4dce-89ca-cab188e4098e" path="/var/lib/kubelet/pods/e2987317-e3b0-4dce-89ca-cab188e4098e/volumes" Jan 28 17:44:56 crc 
kubenswrapper[5001]: I0128 17:44:56.609475 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fafaadec-e27a-42a8-86e3-b128add5edc3" path="/var/lib/kubelet/pods/fafaadec-e27a-42a8-86e3-b128add5edc3/volumes" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.025988 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-z89s5"] Jan 28 17:44:57 crc kubenswrapper[5001]: E0128 17:44:57.026310 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b238e8f-2244-480f-86f8-a5262f531e04" containerName="nova-kuttl-metadata-metadata" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.026326 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b238e8f-2244-480f-86f8-a5262f531e04" containerName="nova-kuttl-metadata-metadata" Jan 28 17:44:57 crc kubenswrapper[5001]: E0128 17:44:57.026340 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b7f48b-43e2-432a-89a5-734b082b1b84" containerName="mariadb-account-delete" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.026346 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b7f48b-43e2-432a-89a5-734b082b1b84" containerName="mariadb-account-delete" Jan 28 17:44:57 crc kubenswrapper[5001]: E0128 17:44:57.026369 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.026376 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:44:57 crc kubenswrapper[5001]: E0128 17:44:57.026385 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b238e8f-2244-480f-86f8-a5262f531e04" containerName="nova-kuttl-metadata-log" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.026391 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b238e8f-2244-480f-86f8-a5262f531e04" containerName="nova-kuttl-metadata-log" Jan 28 17:44:57 crc kubenswrapper[5001]: E0128 17:44:57.026402 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41838942-06bc-4442-b961-191b29830fa1" containerName="mariadb-account-delete" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.026407 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="41838942-06bc-4442-b961-191b29830fa1" containerName="mariadb-account-delete" Jan 28 17:44:57 crc kubenswrapper[5001]: E0128 17:44:57.026414 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0354446b-b372-4934-bf4e-43ecd798ca5c" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.026421 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="0354446b-b372-4934-bf4e-43ecd798ca5c" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 28 17:44:57 crc kubenswrapper[5001]: E0128 17:44:57.026435 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2987317-e3b0-4dce-89ca-cab188e4098e" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.026443 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2987317-e3b0-4dce-89ca-cab188e4098e" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:44:57 crc kubenswrapper[5001]: E0128 17:44:57.026461 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5254d426-9927-4afe-aaf4-fc4dcb433a2e" 
containerName="mariadb-account-delete" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.026468 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="5254d426-9927-4afe-aaf4-fc4dcb433a2e" containerName="mariadb-account-delete" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.026613 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="faac14b5-16bd-4d6d-99b0-9bb7ba9ecb2d" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.026626 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="41838942-06bc-4442-b961-191b29830fa1" containerName="mariadb-account-delete" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.026637 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b7f48b-43e2-432a-89a5-734b082b1b84" containerName="mariadb-account-delete" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.026652 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b238e8f-2244-480f-86f8-a5262f531e04" containerName="nova-kuttl-metadata-log" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.026661 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="5254d426-9927-4afe-aaf4-fc4dcb433a2e" containerName="mariadb-account-delete" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.026671 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b238e8f-2244-480f-86f8-a5262f531e04" containerName="nova-kuttl-metadata-metadata" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.026681 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="0354446b-b372-4934-bf4e-43ecd798ca5c" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.026696 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2987317-e3b0-4dce-89ca-cab188e4098e" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.027233 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-z89s5" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.035895 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-z89s5"] Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.129255 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhqnd\" (UniqueName: \"kubernetes.io/projected/687d3405-fbd8-4494-9398-c9e4be313cd9-kube-api-access-dhqnd\") pod \"nova-api-db-create-z89s5\" (UID: \"687d3405-fbd8-4494-9398-c9e4be313cd9\") " pod="nova-kuttl-default/nova-api-db-create-z89s5" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.129355 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/687d3405-fbd8-4494-9398-c9e4be313cd9-operator-scripts\") pod \"nova-api-db-create-z89s5\" (UID: \"687d3405-fbd8-4494-9398-c9e4be313cd9\") " pod="nova-kuttl-default/nova-api-db-create-z89s5" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.129709 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-s9qwz"] Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.130753 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-s9qwz" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.138188 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-s9qwz"] Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.230869 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0906dfe4-e13d-4c1b-a310-9ece4e46a3d6-operator-scripts\") pod \"nova-cell0-db-create-s9qwz\" (UID: \"0906dfe4-e13d-4c1b-a310-9ece4e46a3d6\") " pod="nova-kuttl-default/nova-cell0-db-create-s9qwz" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.230946 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/687d3405-fbd8-4494-9398-c9e4be313cd9-operator-scripts\") pod \"nova-api-db-create-z89s5\" (UID: \"687d3405-fbd8-4494-9398-c9e4be313cd9\") " pod="nova-kuttl-default/nova-api-db-create-z89s5" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.231069 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt4sn\" (UniqueName: \"kubernetes.io/projected/0906dfe4-e13d-4c1b-a310-9ece4e46a3d6-kube-api-access-pt4sn\") pod \"nova-cell0-db-create-s9qwz\" (UID: \"0906dfe4-e13d-4c1b-a310-9ece4e46a3d6\") " pod="nova-kuttl-default/nova-cell0-db-create-s9qwz" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.231124 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhqnd\" (UniqueName: \"kubernetes.io/projected/687d3405-fbd8-4494-9398-c9e4be313cd9-kube-api-access-dhqnd\") pod \"nova-api-db-create-z89s5\" (UID: \"687d3405-fbd8-4494-9398-c9e4be313cd9\") " pod="nova-kuttl-default/nova-api-db-create-z89s5" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.231766 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/687d3405-fbd8-4494-9398-c9e4be313cd9-operator-scripts\") pod \"nova-api-db-create-z89s5\" (UID: \"687d3405-fbd8-4494-9398-c9e4be313cd9\") " pod="nova-kuttl-default/nova-api-db-create-z89s5" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.240062 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-eb14-account-create-update-bfbhj"] Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.241548 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-eb14-account-create-update-bfbhj" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.244819 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.247702 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-eb14-account-create-update-bfbhj"] Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.248594 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhqnd\" (UniqueName: \"kubernetes.io/projected/687d3405-fbd8-4494-9398-c9e4be313cd9-kube-api-access-dhqnd\") pod \"nova-api-db-create-z89s5\" (UID: \"687d3405-fbd8-4494-9398-c9e4be313cd9\") " pod="nova-kuttl-default/nova-api-db-create-z89s5" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.330602 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-7q4dw"] Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.332084 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-7q4dw" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.333222 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt4sn\" (UniqueName: \"kubernetes.io/projected/0906dfe4-e13d-4c1b-a310-9ece4e46a3d6-kube-api-access-pt4sn\") pod \"nova-cell0-db-create-s9qwz\" (UID: \"0906dfe4-e13d-4c1b-a310-9ece4e46a3d6\") " pod="nova-kuttl-default/nova-cell0-db-create-s9qwz" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.333553 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0906dfe4-e13d-4c1b-a310-9ece4e46a3d6-operator-scripts\") pod \"nova-cell0-db-create-s9qwz\" (UID: \"0906dfe4-e13d-4c1b-a310-9ece4e46a3d6\") " pod="nova-kuttl-default/nova-cell0-db-create-s9qwz" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.334403 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0906dfe4-e13d-4c1b-a310-9ece4e46a3d6-operator-scripts\") pod \"nova-cell0-db-create-s9qwz\" (UID: \"0906dfe4-e13d-4c1b-a310-9ece4e46a3d6\") " pod="nova-kuttl-default/nova-cell0-db-create-s9qwz" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.340820 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-7q4dw"] Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.347476 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-z89s5" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.351911 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt4sn\" (UniqueName: \"kubernetes.io/projected/0906dfe4-e13d-4c1b-a310-9ece4e46a3d6-kube-api-access-pt4sn\") pod \"nova-cell0-db-create-s9qwz\" (UID: \"0906dfe4-e13d-4c1b-a310-9ece4e46a3d6\") " pod="nova-kuttl-default/nova-cell0-db-create-s9qwz" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.439054 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9bwv\" (UniqueName: \"kubernetes.io/projected/16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b-kube-api-access-z9bwv\") pod \"nova-cell1-db-create-7q4dw\" (UID: \"16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b\") " pod="nova-kuttl-default/nova-cell1-db-create-7q4dw" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.439144 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqrrn\" (UniqueName: \"kubernetes.io/projected/a7ca8ffb-f4c8-4394-9b96-85de383793b8-kube-api-access-bqrrn\") pod \"nova-api-eb14-account-create-update-bfbhj\" (UID: \"a7ca8ffb-f4c8-4394-9b96-85de383793b8\") " pod="nova-kuttl-default/nova-api-eb14-account-create-update-bfbhj" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.439198 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7ca8ffb-f4c8-4394-9b96-85de383793b8-operator-scripts\") pod \"nova-api-eb14-account-create-update-bfbhj\" (UID: \"a7ca8ffb-f4c8-4394-9b96-85de383793b8\") " pod="nova-kuttl-default/nova-api-eb14-account-create-update-bfbhj" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.439389 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b-operator-scripts\") pod \"nova-cell1-db-create-7q4dw\" (UID: \"16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b\") " pod="nova-kuttl-default/nova-cell1-db-create-7q4dw" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.440552 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-459a-account-create-update-tgh8r"] Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.441438 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-459a-account-create-update-tgh8r" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.443267 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.449606 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-s9qwz" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.454447 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-459a-account-create-update-tgh8r"] Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.541812 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9bwv\" (UniqueName: \"kubernetes.io/projected/16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b-kube-api-access-z9bwv\") pod \"nova-cell1-db-create-7q4dw\" (UID: \"16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b\") " pod="nova-kuttl-default/nova-cell1-db-create-7q4dw" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.541877 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqrrn\" (UniqueName: \"kubernetes.io/projected/a7ca8ffb-f4c8-4394-9b96-85de383793b8-kube-api-access-bqrrn\") pod \"nova-api-eb14-account-create-update-bfbhj\" (UID: \"a7ca8ffb-f4c8-4394-9b96-85de383793b8\") " pod="nova-kuttl-default/nova-api-eb14-account-create-update-bfbhj" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.541936 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qtbn\" (UniqueName: \"kubernetes.io/projected/8b05d2d1-cd75-4f7f-b89b-82192c1fb216-kube-api-access-5qtbn\") pod \"nova-cell0-459a-account-create-update-tgh8r\" (UID: \"8b05d2d1-cd75-4f7f-b89b-82192c1fb216\") " pod="nova-kuttl-default/nova-cell0-459a-account-create-update-tgh8r" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.541988 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7ca8ffb-f4c8-4394-9b96-85de383793b8-operator-scripts\") pod \"nova-api-eb14-account-create-update-bfbhj\" (UID: \"a7ca8ffb-f4c8-4394-9b96-85de383793b8\") " pod="nova-kuttl-default/nova-api-eb14-account-create-update-bfbhj" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.542020 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b05d2d1-cd75-4f7f-b89b-82192c1fb216-operator-scripts\") pod \"nova-cell0-459a-account-create-update-tgh8r\" (UID: \"8b05d2d1-cd75-4f7f-b89b-82192c1fb216\") " pod="nova-kuttl-default/nova-cell0-459a-account-create-update-tgh8r" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.542062 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b-operator-scripts\") pod \"nova-cell1-db-create-7q4dw\" (UID: \"16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b\") " pod="nova-kuttl-default/nova-cell1-db-create-7q4dw" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.542868 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b-operator-scripts\") pod \"nova-cell1-db-create-7q4dw\" (UID: \"16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b\") " pod="nova-kuttl-default/nova-cell1-db-create-7q4dw" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.542928 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7ca8ffb-f4c8-4394-9b96-85de383793b8-operator-scripts\") pod \"nova-api-eb14-account-create-update-bfbhj\" (UID: 
\"a7ca8ffb-f4c8-4394-9b96-85de383793b8\") " pod="nova-kuttl-default/nova-api-eb14-account-create-update-bfbhj" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.565154 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9bwv\" (UniqueName: \"kubernetes.io/projected/16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b-kube-api-access-z9bwv\") pod \"nova-cell1-db-create-7q4dw\" (UID: \"16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b\") " pod="nova-kuttl-default/nova-cell1-db-create-7q4dw" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.564697 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqrrn\" (UniqueName: \"kubernetes.io/projected/a7ca8ffb-f4c8-4394-9b96-85de383793b8-kube-api-access-bqrrn\") pod \"nova-api-eb14-account-create-update-bfbhj\" (UID: \"a7ca8ffb-f4c8-4394-9b96-85de383793b8\") " pod="nova-kuttl-default/nova-api-eb14-account-create-update-bfbhj" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.590146 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-eb14-account-create-update-bfbhj" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.643118 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qtbn\" (UniqueName: \"kubernetes.io/projected/8b05d2d1-cd75-4f7f-b89b-82192c1fb216-kube-api-access-5qtbn\") pod \"nova-cell0-459a-account-create-update-tgh8r\" (UID: \"8b05d2d1-cd75-4f7f-b89b-82192c1fb216\") " pod="nova-kuttl-default/nova-cell0-459a-account-create-update-tgh8r" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.643180 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b05d2d1-cd75-4f7f-b89b-82192c1fb216-operator-scripts\") pod \"nova-cell0-459a-account-create-update-tgh8r\" (UID: \"8b05d2d1-cd75-4f7f-b89b-82192c1fb216\") " pod="nova-kuttl-default/nova-cell0-459a-account-create-update-tgh8r" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.644110 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b05d2d1-cd75-4f7f-b89b-82192c1fb216-operator-scripts\") pod \"nova-cell0-459a-account-create-update-tgh8r\" (UID: \"8b05d2d1-cd75-4f7f-b89b-82192c1fb216\") " pod="nova-kuttl-default/nova-cell0-459a-account-create-update-tgh8r" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.654693 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-2af7-account-create-update-f6zpn"] Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.655562 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-2af7-account-create-update-f6zpn" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.659282 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.662719 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qtbn\" (UniqueName: \"kubernetes.io/projected/8b05d2d1-cd75-4f7f-b89b-82192c1fb216-kube-api-access-5qtbn\") pod \"nova-cell0-459a-account-create-update-tgh8r\" (UID: \"8b05d2d1-cd75-4f7f-b89b-82192c1fb216\") " pod="nova-kuttl-default/nova-cell0-459a-account-create-update-tgh8r" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.671043 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-2af7-account-create-update-f6zpn"] Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.708060 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-7q4dw" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.764717 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-459a-account-create-update-tgh8r" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.818425 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-z89s5"] Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.848449 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c50528-48fd-4603-9f3c-6217d58ca8d1-operator-scripts\") pod \"nova-cell1-2af7-account-create-update-f6zpn\" (UID: \"83c50528-48fd-4603-9f3c-6217d58ca8d1\") " pod="nova-kuttl-default/nova-cell1-2af7-account-create-update-f6zpn" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.848573 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4gtg\" (UniqueName: \"kubernetes.io/projected/83c50528-48fd-4603-9f3c-6217d58ca8d1-kube-api-access-c4gtg\") pod \"nova-cell1-2af7-account-create-update-f6zpn\" (UID: \"83c50528-48fd-4603-9f3c-6217d58ca8d1\") " pod="nova-kuttl-default/nova-cell1-2af7-account-create-update-f6zpn" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.930001 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-s9qwz"] Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.949743 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c50528-48fd-4603-9f3c-6217d58ca8d1-operator-scripts\") pod \"nova-cell1-2af7-account-create-update-f6zpn\" (UID: \"83c50528-48fd-4603-9f3c-6217d58ca8d1\") " pod="nova-kuttl-default/nova-cell1-2af7-account-create-update-f6zpn" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.949835 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4gtg\" (UniqueName: \"kubernetes.io/projected/83c50528-48fd-4603-9f3c-6217d58ca8d1-kube-api-access-c4gtg\") pod \"nova-cell1-2af7-account-create-update-f6zpn\" (UID: \"83c50528-48fd-4603-9f3c-6217d58ca8d1\") " pod="nova-kuttl-default/nova-cell1-2af7-account-create-update-f6zpn" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.950509 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c50528-48fd-4603-9f3c-6217d58ca8d1-operator-scripts\") pod \"nova-cell1-2af7-account-create-update-f6zpn\" (UID: \"83c50528-48fd-4603-9f3c-6217d58ca8d1\") " pod="nova-kuttl-default/nova-cell1-2af7-account-create-update-f6zpn" Jan 28 17:44:57 crc kubenswrapper[5001]: W0128 17:44:57.950565 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0906dfe4_e13d_4c1b_a310_9ece4e46a3d6.slice/crio-a84250122aed1722129a78396a14ccbd08a220185eb0a722c6ec3966956853b3 WatchSource:0}: Error finding container a84250122aed1722129a78396a14ccbd08a220185eb0a722c6ec3966956853b3: Status 404 returned error can't find the container with id a84250122aed1722129a78396a14ccbd08a220185eb0a722c6ec3966956853b3 Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.973421 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4gtg\" (UniqueName: \"kubernetes.io/projected/83c50528-48fd-4603-9f3c-6217d58ca8d1-kube-api-access-c4gtg\") pod \"nova-cell1-2af7-account-create-update-f6zpn\" (UID: \"83c50528-48fd-4603-9f3c-6217d58ca8d1\") " pod="nova-kuttl-default/nova-cell1-2af7-account-create-update-f6zpn" Jan 28 17:44:57 crc kubenswrapper[5001]: I0128 17:44:57.982340 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-2af7-account-create-update-f6zpn" Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.063439 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-eb14-account-create-update-bfbhj"] Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.173452 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-7q4dw"] Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.246068 5001 scope.go:117] "RemoveContainer" containerID="51b4d49360b3ecb917a4cd4ce74f9ae094e15c8ac17f60226de6a784bc187fef" Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.259352 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-459a-account-create-update-tgh8r"] Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.265276 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-s9qwz" event={"ID":"0906dfe4-e13d-4c1b-a310-9ece4e46a3d6","Type":"ContainerStarted","Data":"546186fa0e66d896b475806941c2fca4ee9931096ff4285d377bcf06849f2a5f"} Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.265617 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-s9qwz" event={"ID":"0906dfe4-e13d-4c1b-a310-9ece4e46a3d6","Type":"ContainerStarted","Data":"a84250122aed1722129a78396a14ccbd08a220185eb0a722c6ec3966956853b3"} Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.268958 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-z89s5" event={"ID":"687d3405-fbd8-4494-9398-c9e4be313cd9","Type":"ContainerStarted","Data":"458d8261145e83b564a17a3cc33c1cb7b78a947e358ba22ded142be1e9fb319e"} Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.269089 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-z89s5" event={"ID":"687d3405-fbd8-4494-9398-c9e4be313cd9","Type":"ContainerStarted","Data":"ac3bc8cb6524b52e19394ff60420edd6669509cc44391ca3ec131cfebf3d439a"} Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.275571 5001 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-eb14-account-create-update-bfbhj" event={"ID":"a7ca8ffb-f4c8-4394-9b96-85de383793b8","Type":"ContainerStarted","Data":"d1a742eeb08605f70f9ec2ea1ab89581133ae25b9b86e42981e6cf16776d2a30"} Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.276535 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-7q4dw" event={"ID":"16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b","Type":"ContainerStarted","Data":"6cadc554c5b0aa298f0c27e8ec1451f1803cc2043317636f2e918c76577a9ff6"} Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.288095 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-cell0-db-create-s9qwz" podStartSLOduration=1.288077208 podStartE2EDuration="1.288077208s" podCreationTimestamp="2026-01-28 17:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:44:58.285168984 +0000 UTC m=+1744.452957214" watchObservedRunningTime="2026-01-28 17:44:58.288077208 +0000 UTC m=+1744.455865438" Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.303020 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-api-eb14-account-create-update-bfbhj" podStartSLOduration=1.303002979 podStartE2EDuration="1.303002979s" podCreationTimestamp="2026-01-28 17:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:44:58.302279838 +0000 UTC m=+1744.470068078" watchObservedRunningTime="2026-01-28 17:44:58.303002979 +0000 UTC m=+1744.470791209" Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.448074 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-2af7-account-create-update-f6zpn"] Jan 28 17:44:58 crc kubenswrapper[5001]: W0128 17:44:58.486115 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83c50528_48fd_4603_9f3c_6217d58ca8d1.slice/crio-98115a9cb7ecf3760599a5851546f1ee4767b29130fe872cbbf891b362c680c4 WatchSource:0}: Error finding container 98115a9cb7ecf3760599a5851546f1ee4767b29130fe872cbbf891b362c680c4: Status 404 returned error can't find the container with id 98115a9cb7ecf3760599a5851546f1ee4767b29130fe872cbbf891b362c680c4 Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.634501 5001 scope.go:117] "RemoveContainer" containerID="43a6c0803bbe577b3aff6c88dcced1a196833ca0aabba98bdd67435c7fc96bf3" Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.665522 5001 scope.go:117] "RemoveContainer" containerID="735c5592929a7c0fd8d64b746274f8d9d69cd8f431a6ff820aa773fdca855bc0" Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.714666 5001 scope.go:117] "RemoveContainer" containerID="a04f03da1cb8d0884a0bc28984ccb8c9152297383f7b20360a835265e6883ea4" Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.759335 5001 scope.go:117] "RemoveContainer" containerID="0ecdb842c25423d1391be705b257066d0cfb83c0d3e3d8a9856b607174aed0e7" Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.782770 5001 scope.go:117] "RemoveContainer" containerID="2129a9942767ce256fd3be53a912afbb591634eaf8807091119ca4627cc2a8c0" Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.805481 5001 scope.go:117] "RemoveContainer" 
containerID="864741892b7f5fd75b65c6e4079fc862e15f9c9285bf1db9636cf43d72ff2968" Jan 28 17:44:58 crc kubenswrapper[5001]: I0128 17:44:58.848788 5001 scope.go:117] "RemoveContainer" containerID="2363bafdad4f4444eba4fa9e8b59218aae530e7daf3740d20c0fdca2c4417bf3" Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.285622 5001 generic.go:334] "Generic (PLEG): container finished" podID="0906dfe4-e13d-4c1b-a310-9ece4e46a3d6" containerID="546186fa0e66d896b475806941c2fca4ee9931096ff4285d377bcf06849f2a5f" exitCode=0 Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.285793 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-s9qwz" event={"ID":"0906dfe4-e13d-4c1b-a310-9ece4e46a3d6","Type":"ContainerDied","Data":"546186fa0e66d896b475806941c2fca4ee9931096ff4285d377bcf06849f2a5f"} Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.287629 5001 generic.go:334] "Generic (PLEG): container finished" podID="16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b" containerID="669d05e2117aea299f4eb92c64c1f9cafa29ac86509cc08c36358b2492568eba" exitCode=0 Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.287691 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-7q4dw" event={"ID":"16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b","Type":"ContainerDied","Data":"669d05e2117aea299f4eb92c64c1f9cafa29ac86509cc08c36358b2492568eba"} Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.292149 5001 generic.go:334] "Generic (PLEG): container finished" podID="83c50528-48fd-4603-9f3c-6217d58ca8d1" containerID="b0fc950e6c91feb62b813953d3f483a2d8fa53d0748a257d53153e06dc05a1c1" exitCode=0 Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.292244 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-2af7-account-create-update-f6zpn" event={"ID":"83c50528-48fd-4603-9f3c-6217d58ca8d1","Type":"ContainerDied","Data":"b0fc950e6c91feb62b813953d3f483a2d8fa53d0748a257d53153e06dc05a1c1"} Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.292290 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-2af7-account-create-update-f6zpn" event={"ID":"83c50528-48fd-4603-9f3c-6217d58ca8d1","Type":"ContainerStarted","Data":"98115a9cb7ecf3760599a5851546f1ee4767b29130fe872cbbf891b362c680c4"} Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.294042 5001 generic.go:334] "Generic (PLEG): container finished" podID="8b05d2d1-cd75-4f7f-b89b-82192c1fb216" containerID="0add5320ee9f94e72e66057cbdd5c5582c1a55e64dcd04c539201497277cdbcc" exitCode=0 Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.294125 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-459a-account-create-update-tgh8r" event={"ID":"8b05d2d1-cd75-4f7f-b89b-82192c1fb216","Type":"ContainerDied","Data":"0add5320ee9f94e72e66057cbdd5c5582c1a55e64dcd04c539201497277cdbcc"} Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.294170 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-459a-account-create-update-tgh8r" event={"ID":"8b05d2d1-cd75-4f7f-b89b-82192c1fb216","Type":"ContainerStarted","Data":"49b75b5feb29144ce12ca17ba567c0f6ea2a7f373bfd322e28aab03580855744"} Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.296304 5001 generic.go:334] "Generic (PLEG): container finished" podID="687d3405-fbd8-4494-9398-c9e4be313cd9" containerID="458d8261145e83b564a17a3cc33c1cb7b78a947e358ba22ded142be1e9fb319e" exitCode=0 Jan 28 17:44:59 crc 
kubenswrapper[5001]: I0128 17:44:59.296339 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-z89s5" event={"ID":"687d3405-fbd8-4494-9398-c9e4be313cd9","Type":"ContainerDied","Data":"458d8261145e83b564a17a3cc33c1cb7b78a947e358ba22ded142be1e9fb319e"} Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.298041 5001 generic.go:334] "Generic (PLEG): container finished" podID="a7ca8ffb-f4c8-4394-9b96-85de383793b8" containerID="63b1a0ff0c5d0e505261573740bb2d672dfbb64dc3a2824ffbde86462238ad5a" exitCode=0 Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.298164 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-eb14-account-create-update-bfbhj" event={"ID":"a7ca8ffb-f4c8-4394-9b96-85de383793b8","Type":"ContainerDied","Data":"63b1a0ff0c5d0e505261573740bb2d672dfbb64dc3a2824ffbde86462238ad5a"} Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.619749 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-z89s5" Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.782062 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/687d3405-fbd8-4494-9398-c9e4be313cd9-operator-scripts\") pod \"687d3405-fbd8-4494-9398-c9e4be313cd9\" (UID: \"687d3405-fbd8-4494-9398-c9e4be313cd9\") " Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.782172 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhqnd\" (UniqueName: \"kubernetes.io/projected/687d3405-fbd8-4494-9398-c9e4be313cd9-kube-api-access-dhqnd\") pod \"687d3405-fbd8-4494-9398-c9e4be313cd9\" (UID: \"687d3405-fbd8-4494-9398-c9e4be313cd9\") " Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.783043 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/687d3405-fbd8-4494-9398-c9e4be313cd9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "687d3405-fbd8-4494-9398-c9e4be313cd9" (UID: "687d3405-fbd8-4494-9398-c9e4be313cd9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.788160 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/687d3405-fbd8-4494-9398-c9e4be313cd9-kube-api-access-dhqnd" (OuterVolumeSpecName: "kube-api-access-dhqnd") pod "687d3405-fbd8-4494-9398-c9e4be313cd9" (UID: "687d3405-fbd8-4494-9398-c9e4be313cd9"). InnerVolumeSpecName "kube-api-access-dhqnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.883967 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhqnd\" (UniqueName: \"kubernetes.io/projected/687d3405-fbd8-4494-9398-c9e4be313cd9-kube-api-access-dhqnd\") on node \"crc\" DevicePath \"\"" Jan 28 17:44:59 crc kubenswrapper[5001]: I0128 17:44:59.884043 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/687d3405-fbd8-4494-9398-c9e4be313cd9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.155666 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h"] Jan 28 17:45:00 crc kubenswrapper[5001]: E0128 17:45:00.156512 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687d3405-fbd8-4494-9398-c9e4be313cd9" containerName="mariadb-database-create" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.156539 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="687d3405-fbd8-4494-9398-c9e4be313cd9" containerName="mariadb-database-create" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.156740 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="687d3405-fbd8-4494-9398-c9e4be313cd9" containerName="mariadb-database-create" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.157400 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.159943 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.159985 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.193652 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h"] Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.221785 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc"] Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.229263 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-6xjbc"] Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.237220 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds"] Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.248248 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6xrds"] Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.290786 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64ece945-6b6d-41d4-a73e-8f9ac057a9e8-secret-volume\") pod \"collect-profiles-29493705-7rq6h\" (UID: \"64ece945-6b6d-41d4-a73e-8f9ac057a9e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.290871 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64ece945-6b6d-41d4-a73e-8f9ac057a9e8-config-volume\") pod \"collect-profiles-29493705-7rq6h\" (UID: \"64ece945-6b6d-41d4-a73e-8f9ac057a9e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.291047 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6sb6\" (UniqueName: \"kubernetes.io/projected/64ece945-6b6d-41d4-a73e-8f9ac057a9e8-kube-api-access-r6sb6\") pod \"collect-profiles-29493705-7rq6h\" (UID: \"64ece945-6b6d-41d4-a73e-8f9ac057a9e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.311359 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-z89s5" event={"ID":"687d3405-fbd8-4494-9398-c9e4be313cd9","Type":"ContainerDied","Data":"ac3bc8cb6524b52e19394ff60420edd6669509cc44391ca3ec131cfebf3d439a"} Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.311520 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac3bc8cb6524b52e19394ff60420edd6669509cc44391ca3ec131cfebf3d439a" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.311565 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-z89s5" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.395240 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64ece945-6b6d-41d4-a73e-8f9ac057a9e8-secret-volume\") pod \"collect-profiles-29493705-7rq6h\" (UID: \"64ece945-6b6d-41d4-a73e-8f9ac057a9e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.395293 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64ece945-6b6d-41d4-a73e-8f9ac057a9e8-config-volume\") pod \"collect-profiles-29493705-7rq6h\" (UID: \"64ece945-6b6d-41d4-a73e-8f9ac057a9e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.396302 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64ece945-6b6d-41d4-a73e-8f9ac057a9e8-config-volume\") pod \"collect-profiles-29493705-7rq6h\" (UID: \"64ece945-6b6d-41d4-a73e-8f9ac057a9e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.396370 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6sb6\" (UniqueName: \"kubernetes.io/projected/64ece945-6b6d-41d4-a73e-8f9ac057a9e8-kube-api-access-r6sb6\") pod \"collect-profiles-29493705-7rq6h\" (UID: \"64ece945-6b6d-41d4-a73e-8f9ac057a9e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.414878 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64ece945-6b6d-41d4-a73e-8f9ac057a9e8-secret-volume\") pod \"collect-profiles-29493705-7rq6h\" (UID: \"64ece945-6b6d-41d4-a73e-8f9ac057a9e8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.420449 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6sb6\" (UniqueName: \"kubernetes.io/projected/64ece945-6b6d-41d4-a73e-8f9ac057a9e8-kube-api-access-r6sb6\") pod \"collect-profiles-29493705-7rq6h\" (UID: \"64ece945-6b6d-41d4-a73e-8f9ac057a9e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.489087 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.606543 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bd938d1-8621-4b3b-acad-d28619d75ca3" path="/var/lib/kubelet/pods/4bd938d1-8621-4b3b-acad-d28619d75ca3/volumes" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.607888 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf809582-ac7b-428b-9d93-9724bc2edccf" path="/var/lib/kubelet/pods/bf809582-ac7b-428b-9d93-9724bc2edccf/volumes" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.670690 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-459a-account-create-update-tgh8r" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.790081 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-7q4dw" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.805552 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qtbn\" (UniqueName: \"kubernetes.io/projected/8b05d2d1-cd75-4f7f-b89b-82192c1fb216-kube-api-access-5qtbn\") pod \"8b05d2d1-cd75-4f7f-b89b-82192c1fb216\" (UID: \"8b05d2d1-cd75-4f7f-b89b-82192c1fb216\") " Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.805706 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b05d2d1-cd75-4f7f-b89b-82192c1fb216-operator-scripts\") pod \"8b05d2d1-cd75-4f7f-b89b-82192c1fb216\" (UID: \"8b05d2d1-cd75-4f7f-b89b-82192c1fb216\") " Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.805748 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-2af7-account-create-update-f6zpn" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.807169 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b05d2d1-cd75-4f7f-b89b-82192c1fb216-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b05d2d1-cd75-4f7f-b89b-82192c1fb216" (UID: "8b05d2d1-cd75-4f7f-b89b-82192c1fb216"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.821114 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b05d2d1-cd75-4f7f-b89b-82192c1fb216-kube-api-access-5qtbn" (OuterVolumeSpecName: "kube-api-access-5qtbn") pod "8b05d2d1-cd75-4f7f-b89b-82192c1fb216" (UID: "8b05d2d1-cd75-4f7f-b89b-82192c1fb216"). InnerVolumeSpecName "kube-api-access-5qtbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.834945 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-s9qwz" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.907701 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9bwv\" (UniqueName: \"kubernetes.io/projected/16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b-kube-api-access-z9bwv\") pod \"16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b\" (UID: \"16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b\") " Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.907812 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b-operator-scripts\") pod \"16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b\" (UID: \"16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b\") " Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.908352 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b" (UID: "16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.908415 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c50528-48fd-4603-9f3c-6217d58ca8d1-operator-scripts\") pod \"83c50528-48fd-4603-9f3c-6217d58ca8d1\" (UID: \"83c50528-48fd-4603-9f3c-6217d58ca8d1\") " Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.908453 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4gtg\" (UniqueName: \"kubernetes.io/projected/83c50528-48fd-4603-9f3c-6217d58ca8d1-kube-api-access-c4gtg\") pod \"83c50528-48fd-4603-9f3c-6217d58ca8d1\" (UID: \"83c50528-48fd-4603-9f3c-6217d58ca8d1\") " Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.908796 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qtbn\" (UniqueName: \"kubernetes.io/projected/8b05d2d1-cd75-4f7f-b89b-82192c1fb216-kube-api-access-5qtbn\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.908815 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b05d2d1-cd75-4f7f-b89b-82192c1fb216-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.908823 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.909223 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c50528-48fd-4603-9f3c-6217d58ca8d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83c50528-48fd-4603-9f3c-6217d58ca8d1" (UID: "83c50528-48fd-4603-9f3c-6217d58ca8d1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.911634 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b-kube-api-access-z9bwv" (OuterVolumeSpecName: "kube-api-access-z9bwv") pod "16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b" (UID: "16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b"). InnerVolumeSpecName "kube-api-access-z9bwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.911740 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c50528-48fd-4603-9f3c-6217d58ca8d1-kube-api-access-c4gtg" (OuterVolumeSpecName: "kube-api-access-c4gtg") pod "83c50528-48fd-4603-9f3c-6217d58ca8d1" (UID: "83c50528-48fd-4603-9f3c-6217d58ca8d1"). InnerVolumeSpecName "kube-api-access-c4gtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:45:00 crc kubenswrapper[5001]: I0128 17:45:00.935331 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-eb14-account-create-update-bfbhj" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.009666 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0906dfe4-e13d-4c1b-a310-9ece4e46a3d6-operator-scripts\") pod \"0906dfe4-e13d-4c1b-a310-9ece4e46a3d6\" (UID: \"0906dfe4-e13d-4c1b-a310-9ece4e46a3d6\") " Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.009894 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt4sn\" (UniqueName: \"kubernetes.io/projected/0906dfe4-e13d-4c1b-a310-9ece4e46a3d6-kube-api-access-pt4sn\") pod \"0906dfe4-e13d-4c1b-a310-9ece4e46a3d6\" (UID: \"0906dfe4-e13d-4c1b-a310-9ece4e46a3d6\") " Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.010277 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4gtg\" (UniqueName: \"kubernetes.io/projected/83c50528-48fd-4603-9f3c-6217d58ca8d1-kube-api-access-c4gtg\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.010295 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9bwv\" (UniqueName: \"kubernetes.io/projected/16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b-kube-api-access-z9bwv\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.010305 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c50528-48fd-4603-9f3c-6217d58ca8d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.010344 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0906dfe4-e13d-4c1b-a310-9ece4e46a3d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0906dfe4-e13d-4c1b-a310-9ece4e46a3d6" (UID: "0906dfe4-e13d-4c1b-a310-9ece4e46a3d6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.012737 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0906dfe4-e13d-4c1b-a310-9ece4e46a3d6-kube-api-access-pt4sn" (OuterVolumeSpecName: "kube-api-access-pt4sn") pod "0906dfe4-e13d-4c1b-a310-9ece4e46a3d6" (UID: "0906dfe4-e13d-4c1b-a310-9ece4e46a3d6"). InnerVolumeSpecName "kube-api-access-pt4sn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.075581 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h"] Jan 28 17:45:01 crc kubenswrapper[5001]: W0128 17:45:01.082796 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64ece945_6b6d_41d4_a73e_8f9ac057a9e8.slice/crio-756a457632609fbbb0c8bf1976f70be93c9f0783b086f4bc4d0966c3b2a8abbb WatchSource:0}: Error finding container 756a457632609fbbb0c8bf1976f70be93c9f0783b086f4bc4d0966c3b2a8abbb: Status 404 returned error can't find the container with id 756a457632609fbbb0c8bf1976f70be93c9f0783b086f4bc4d0966c3b2a8abbb Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.111824 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqrrn\" (UniqueName: \"kubernetes.io/projected/a7ca8ffb-f4c8-4394-9b96-85de383793b8-kube-api-access-bqrrn\") pod \"a7ca8ffb-f4c8-4394-9b96-85de383793b8\" (UID: \"a7ca8ffb-f4c8-4394-9b96-85de383793b8\") " Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.111953 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7ca8ffb-f4c8-4394-9b96-85de383793b8-operator-scripts\") pod \"a7ca8ffb-f4c8-4394-9b96-85de383793b8\" (UID: \"a7ca8ffb-f4c8-4394-9b96-85de383793b8\") " Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.112442 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt4sn\" (UniqueName: \"kubernetes.io/projected/0906dfe4-e13d-4c1b-a310-9ece4e46a3d6-kube-api-access-pt4sn\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.112462 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0906dfe4-e13d-4c1b-a310-9ece4e46a3d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.112471 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7ca8ffb-f4c8-4394-9b96-85de383793b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a7ca8ffb-f4c8-4394-9b96-85de383793b8" (UID: "a7ca8ffb-f4c8-4394-9b96-85de383793b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.115142 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ca8ffb-f4c8-4394-9b96-85de383793b8-kube-api-access-bqrrn" (OuterVolumeSpecName: "kube-api-access-bqrrn") pod "a7ca8ffb-f4c8-4394-9b96-85de383793b8" (UID: "a7ca8ffb-f4c8-4394-9b96-85de383793b8"). InnerVolumeSpecName "kube-api-access-bqrrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.214130 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqrrn\" (UniqueName: \"kubernetes.io/projected/a7ca8ffb-f4c8-4394-9b96-85de383793b8-kube-api-access-bqrrn\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.214181 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7ca8ffb-f4c8-4394-9b96-85de383793b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.334259 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-2af7-account-create-update-f6zpn" event={"ID":"83c50528-48fd-4603-9f3c-6217d58ca8d1","Type":"ContainerDied","Data":"98115a9cb7ecf3760599a5851546f1ee4767b29130fe872cbbf891b362c680c4"} Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.334311 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98115a9cb7ecf3760599a5851546f1ee4767b29130fe872cbbf891b362c680c4" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.334384 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-2af7-account-create-update-f6zpn" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.348053 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h" event={"ID":"64ece945-6b6d-41d4-a73e-8f9ac057a9e8","Type":"ContainerStarted","Data":"564037f1340f5e0fde4985bb4d77ce94105d7fa5b94f04c2e96c75c730f628ee"} Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.348837 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h" event={"ID":"64ece945-6b6d-41d4-a73e-8f9ac057a9e8","Type":"ContainerStarted","Data":"756a457632609fbbb0c8bf1976f70be93c9f0783b086f4bc4d0966c3b2a8abbb"} Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.350884 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-s9qwz" event={"ID":"0906dfe4-e13d-4c1b-a310-9ece4e46a3d6","Type":"ContainerDied","Data":"a84250122aed1722129a78396a14ccbd08a220185eb0a722c6ec3966956853b3"} Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.350924 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a84250122aed1722129a78396a14ccbd08a220185eb0a722c6ec3966956853b3" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.350997 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-s9qwz" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.359841 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-459a-account-create-update-tgh8r" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.359859 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-459a-account-create-update-tgh8r" event={"ID":"8b05d2d1-cd75-4f7f-b89b-82192c1fb216","Type":"ContainerDied","Data":"49b75b5feb29144ce12ca17ba567c0f6ea2a7f373bfd322e28aab03580855744"} Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.359904 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49b75b5feb29144ce12ca17ba567c0f6ea2a7f373bfd322e28aab03580855744" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.361477 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-eb14-account-create-update-bfbhj" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.361589 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-eb14-account-create-update-bfbhj" event={"ID":"a7ca8ffb-f4c8-4394-9b96-85de383793b8","Type":"ContainerDied","Data":"d1a742eeb08605f70f9ec2ea1ab89581133ae25b9b86e42981e6cf16776d2a30"} Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.361713 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1a742eeb08605f70f9ec2ea1ab89581133ae25b9b86e42981e6cf16776d2a30" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.366309 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-7q4dw" event={"ID":"16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b","Type":"ContainerDied","Data":"6cadc554c5b0aa298f0c27e8ec1451f1803cc2043317636f2e918c76577a9ff6"} Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.366394 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cadc554c5b0aa298f0c27e8ec1451f1803cc2043317636f2e918c76577a9ff6" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.366478 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-7q4dw" Jan 28 17:45:01 crc kubenswrapper[5001]: I0128 17:45:01.380326 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h" podStartSLOduration=1.380289497 podStartE2EDuration="1.380289497s" podCreationTimestamp="2026-01-28 17:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:45:01.369500535 +0000 UTC m=+1747.537288765" watchObservedRunningTime="2026-01-28 17:45:01.380289497 +0000 UTC m=+1747.548077727" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.375149 5001 generic.go:334] "Generic (PLEG): container finished" podID="64ece945-6b6d-41d4-a73e-8f9ac057a9e8" containerID="564037f1340f5e0fde4985bb4d77ce94105d7fa5b94f04c2e96c75c730f628ee" exitCode=0 Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.375243 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h" event={"ID":"64ece945-6b6d-41d4-a73e-8f9ac057a9e8","Type":"ContainerDied","Data":"564037f1340f5e0fde4985bb4d77ce94105d7fa5b94f04c2e96c75c730f628ee"} Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.643382 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns"] Jan 28 17:45:02 crc kubenswrapper[5001]: E0128 17:45:02.643777 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c50528-48fd-4603-9f3c-6217d58ca8d1" containerName="mariadb-account-create-update" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.643805 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c50528-48fd-4603-9f3c-6217d58ca8d1" containerName="mariadb-account-create-update" Jan 28 17:45:02 crc kubenswrapper[5001]: E0128 17:45:02.643822 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b05d2d1-cd75-4f7f-b89b-82192c1fb216" containerName="mariadb-account-create-update" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.643831 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b05d2d1-cd75-4f7f-b89b-82192c1fb216" containerName="mariadb-account-create-update" Jan 28 17:45:02 crc kubenswrapper[5001]: E0128 17:45:02.643855 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b" containerName="mariadb-database-create" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.643862 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b" containerName="mariadb-database-create" Jan 28 17:45:02 crc kubenswrapper[5001]: E0128 17:45:02.643904 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0906dfe4-e13d-4c1b-a310-9ece4e46a3d6" containerName="mariadb-database-create" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.643912 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="0906dfe4-e13d-4c1b-a310-9ece4e46a3d6" containerName="mariadb-database-create" Jan 28 17:45:02 crc kubenswrapper[5001]: E0128 17:45:02.643921 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ca8ffb-f4c8-4394-9b96-85de383793b8" containerName="mariadb-account-create-update" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.643947 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ca8ffb-f4c8-4394-9b96-85de383793b8" 
containerName="mariadb-account-create-update" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.644124 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b05d2d1-cd75-4f7f-b89b-82192c1fb216" containerName="mariadb-account-create-update" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.644141 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b" containerName="mariadb-database-create" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.644149 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c50528-48fd-4603-9f3c-6217d58ca8d1" containerName="mariadb-account-create-update" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.644157 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="0906dfe4-e13d-4c1b-a310-9ece4e46a3d6" containerName="mariadb-database-create" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.644169 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ca8ffb-f4c8-4394-9b96-85de383793b8" containerName="mariadb-account-create-update" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.644763 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.647932 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-jcvhv" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.648286 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.650691 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.656235 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns"] Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.742664 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrdwr\" (UniqueName: \"kubernetes.io/projected/9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f-kube-api-access-lrdwr\") pod \"nova-kuttl-cell0-conductor-db-sync-slhns\" (UID: \"9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.742968 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-slhns\" (UID: \"9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.743186 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-slhns\" (UID: \"9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.845042 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-slhns\" (UID: \"9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.845124 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-slhns\" (UID: \"9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.845176 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrdwr\" (UniqueName: \"kubernetes.io/projected/9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f-kube-api-access-lrdwr\") pod \"nova-kuttl-cell0-conductor-db-sync-slhns\" (UID: \"9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.850798 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-slhns\" (UID: \"9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.851724 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-slhns\" (UID: \"9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.862184 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrdwr\" (UniqueName: \"kubernetes.io/projected/9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f-kube-api-access-lrdwr\") pod \"nova-kuttl-cell0-conductor-db-sync-slhns\" (UID: \"9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.920603 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.921756 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.928540 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-compute-fake1-compute-config-data" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.930267 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.943763 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk"] Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.945125 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.947543 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.949152 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts" Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.959445 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk"] Jan 28 17:45:02 crc kubenswrapper[5001]: I0128 17:45:02.972442 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.047639 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m94ss\" (UniqueName: \"kubernetes.io/projected/53256273-450e-4371-bd25-a1e8c96f2d77-kube-api-access-m94ss\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"53256273-450e-4371-bd25-a1e8c96f2d77\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.048012 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30a2d207-76b3-42da-a996-dcd014b8fcba-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-wf6nk\" (UID: \"30a2d207-76b3-42da-a996-dcd014b8fcba\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.048088 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53256273-450e-4371-bd25-a1e8c96f2d77-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"53256273-450e-4371-bd25-a1e8c96f2d77\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.048200 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30a2d207-76b3-42da-a996-dcd014b8fcba-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-wf6nk\" (UID: \"30a2d207-76b3-42da-a996-dcd014b8fcba\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.048255 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2vkv\" (UniqueName: \"kubernetes.io/projected/30a2d207-76b3-42da-a996-dcd014b8fcba-kube-api-access-j2vkv\") pod \"nova-kuttl-cell1-conductor-db-sync-wf6nk\" (UID: \"30a2d207-76b3-42da-a996-dcd014b8fcba\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.069275 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.070304 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.075394 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.082067 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.151065 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m94ss\" (UniqueName: \"kubernetes.io/projected/53256273-450e-4371-bd25-a1e8c96f2d77-kube-api-access-m94ss\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"53256273-450e-4371-bd25-a1e8c96f2d77\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.151157 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30a2d207-76b3-42da-a996-dcd014b8fcba-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-wf6nk\" (UID: \"30a2d207-76b3-42da-a996-dcd014b8fcba\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.151207 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53256273-450e-4371-bd25-a1e8c96f2d77-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"53256273-450e-4371-bd25-a1e8c96f2d77\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.151286 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30a2d207-76b3-42da-a996-dcd014b8fcba-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-wf6nk\" (UID: \"30a2d207-76b3-42da-a996-dcd014b8fcba\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.151341 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2vkv\" (UniqueName: \"kubernetes.io/projected/30a2d207-76b3-42da-a996-dcd014b8fcba-kube-api-access-j2vkv\") pod \"nova-kuttl-cell1-conductor-db-sync-wf6nk\" (UID: \"30a2d207-76b3-42da-a996-dcd014b8fcba\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.159961 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30a2d207-76b3-42da-a996-dcd014b8fcba-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-wf6nk\" (UID: \"30a2d207-76b3-42da-a996-dcd014b8fcba\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.174580 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m94ss\" (UniqueName: \"kubernetes.io/projected/53256273-450e-4371-bd25-a1e8c96f2d77-kube-api-access-m94ss\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"53256273-450e-4371-bd25-a1e8c96f2d77\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.174874 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/53256273-450e-4371-bd25-a1e8c96f2d77-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"53256273-450e-4371-bd25-a1e8c96f2d77\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.179913 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30a2d207-76b3-42da-a996-dcd014b8fcba-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-wf6nk\" (UID: \"30a2d207-76b3-42da-a996-dcd014b8fcba\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.185577 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2vkv\" (UniqueName: \"kubernetes.io/projected/30a2d207-76b3-42da-a996-dcd014b8fcba-kube-api-access-j2vkv\") pod \"nova-kuttl-cell1-conductor-db-sync-wf6nk\" (UID: \"30a2d207-76b3-42da-a996-dcd014b8fcba\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.247313 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.253221 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5490c0-8d10-4966-85b4-e3045a16a80d-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"1e5490c0-8d10-4966-85b4-e3045a16a80d\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.253337 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-652jj\" (UniqueName: \"kubernetes.io/projected/1e5490c0-8d10-4966-85b4-e3045a16a80d-kube-api-access-652jj\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"1e5490c0-8d10-4966-85b4-e3045a16a80d\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.261780 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.355125 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5490c0-8d10-4966-85b4-e3045a16a80d-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"1e5490c0-8d10-4966-85b4-e3045a16a80d\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.355264 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-652jj\" (UniqueName: \"kubernetes.io/projected/1e5490c0-8d10-4966-85b4-e3045a16a80d-kube-api-access-652jj\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"1e5490c0-8d10-4966-85b4-e3045a16a80d\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.364004 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5490c0-8d10-4966-85b4-e3045a16a80d-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"1e5490c0-8d10-4966-85b4-e3045a16a80d\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.388257 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-652jj\" (UniqueName: \"kubernetes.io/projected/1e5490c0-8d10-4966-85b4-e3045a16a80d-kube-api-access-652jj\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"1e5490c0-8d10-4966-85b4-e3045a16a80d\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.396453 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:45:03 crc kubenswrapper[5001]: I0128 17:45:03.781284 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns"] Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.026073 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.028708 5001 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.168335 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h" Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.204148 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64ece945-6b6d-41d4-a73e-8f9ac057a9e8-secret-volume\") pod \"64ece945-6b6d-41d4-a73e-8f9ac057a9e8\" (UID: \"64ece945-6b6d-41d4-a73e-8f9ac057a9e8\") " Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.204245 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6sb6\" (UniqueName: \"kubernetes.io/projected/64ece945-6b6d-41d4-a73e-8f9ac057a9e8-kube-api-access-r6sb6\") pod \"64ece945-6b6d-41d4-a73e-8f9ac057a9e8\" (UID: \"64ece945-6b6d-41d4-a73e-8f9ac057a9e8\") " Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.204295 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64ece945-6b6d-41d4-a73e-8f9ac057a9e8-config-volume\") pod \"64ece945-6b6d-41d4-a73e-8f9ac057a9e8\" (UID: \"64ece945-6b6d-41d4-a73e-8f9ac057a9e8\") " Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.205365 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ece945-6b6d-41d4-a73e-8f9ac057a9e8-config-volume" (OuterVolumeSpecName: "config-volume") pod "64ece945-6b6d-41d4-a73e-8f9ac057a9e8" (UID: "64ece945-6b6d-41d4-a73e-8f9ac057a9e8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.213138 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64ece945-6b6d-41d4-a73e-8f9ac057a9e8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "64ece945-6b6d-41d4-a73e-8f9ac057a9e8" (UID: "64ece945-6b6d-41d4-a73e-8f9ac057a9e8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.213230 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64ece945-6b6d-41d4-a73e-8f9ac057a9e8-kube-api-access-r6sb6" (OuterVolumeSpecName: "kube-api-access-r6sb6") pod "64ece945-6b6d-41d4-a73e-8f9ac057a9e8" (UID: "64ece945-6b6d-41d4-a73e-8f9ac057a9e8"). InnerVolumeSpecName "kube-api-access-r6sb6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.290320 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk"] Jan 28 17:45:04 crc kubenswrapper[5001]: W0128 17:45:04.293955 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30a2d207_76b3_42da_a996_dcd014b8fcba.slice/crio-6658dded60e7e444b0fea2217d9fac44d4743ed8da149b3a32278d732b76ee1a WatchSource:0}: Error finding container 6658dded60e7e444b0fea2217d9fac44d4743ed8da149b3a32278d732b76ee1a: Status 404 returned error can't find the container with id 6658dded60e7e444b0fea2217d9fac44d4743ed8da149b3a32278d732b76ee1a Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.307200 5001 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64ece945-6b6d-41d4-a73e-8f9ac057a9e8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.307228 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6sb6\" (UniqueName: \"kubernetes.io/projected/64ece945-6b6d-41d4-a73e-8f9ac057a9e8-kube-api-access-r6sb6\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.307238 5001 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64ece945-6b6d-41d4-a73e-8f9ac057a9e8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.366377 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:45:04 crc kubenswrapper[5001]: W0128 17:45:04.388156 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e5490c0_8d10_4966_85b4_e3045a16a80d.slice/crio-790d48bdd2f237bbd3e01334b3a7d8281ab9e6dba1b31ec14843b0af3d89670e WatchSource:0}: Error finding container 790d48bdd2f237bbd3e01334b3a7d8281ab9e6dba1b31ec14843b0af3d89670e: Status 404 returned error can't find the container with id 790d48bdd2f237bbd3e01334b3a7d8281ab9e6dba1b31ec14843b0af3d89670e Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.420865 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk" event={"ID":"30a2d207-76b3-42da-a996-dcd014b8fcba","Type":"ContainerStarted","Data":"6658dded60e7e444b0fea2217d9fac44d4743ed8da149b3a32278d732b76ee1a"} Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.426550 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h" event={"ID":"64ece945-6b6d-41d4-a73e-8f9ac057a9e8","Type":"ContainerDied","Data":"756a457632609fbbb0c8bf1976f70be93c9f0783b086f4bc4d0966c3b2a8abbb"} Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.426584 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-7rq6h" Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.426602 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="756a457632609fbbb0c8bf1976f70be93c9f0783b086f4bc4d0966c3b2a8abbb" Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.428431 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"1e5490c0-8d10-4966-85b4-e3045a16a80d","Type":"ContainerStarted","Data":"790d48bdd2f237bbd3e01334b3a7d8281ab9e6dba1b31ec14843b0af3d89670e"} Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.531744 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns" event={"ID":"9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f","Type":"ContainerStarted","Data":"dcf89bbdac48eae726b0ce1cbdd4f84d75f6641ba7cd5a53c61dfac3f67bd39b"} Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.532141 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns" event={"ID":"9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f","Type":"ContainerStarted","Data":"8ee884a82b229a575abb52761d1846ca8262aed5382f1dd6c6b77df2a937e912"} Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.534362 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"53256273-450e-4371-bd25-a1e8c96f2d77","Type":"ContainerStarted","Data":"f3e804465c86fff179ad3453dc229629804bed01f372803126312f02d92b7358"} Jan 28 17:45:04 crc kubenswrapper[5001]: I0128 17:45:04.555609 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns" podStartSLOduration=2.555586825 podStartE2EDuration="2.555586825s" podCreationTimestamp="2026-01-28 17:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:45:04.545811073 +0000 UTC m=+1750.713599303" watchObservedRunningTime="2026-01-28 17:45:04.555586825 +0000 UTC m=+1750.723375055" Jan 28 17:45:05 crc kubenswrapper[5001]: I0128 17:45:05.544464 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk" event={"ID":"30a2d207-76b3-42da-a996-dcd014b8fcba","Type":"ContainerStarted","Data":"0b2f70b84a1ee0d6e70dc0fbf9538d6596d69a1c11970e88b050d09ea9f0f9c8"} Jan 28 17:45:05 crc kubenswrapper[5001]: I0128 17:45:05.547811 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"1e5490c0-8d10-4966-85b4-e3045a16a80d","Type":"ContainerStarted","Data":"98f7290f1249cc7d6fd6209d67e94944d60d3399fe6939ecce2b1ffb73347f05"} Jan 28 17:45:05 crc kubenswrapper[5001]: I0128 17:45:05.596017 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=2.596001646 podStartE2EDuration="2.596001646s" podCreationTimestamp="2026-01-28 17:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:45:05.589146278 +0000 UTC m=+1751.756934518" watchObservedRunningTime="2026-01-28 17:45:05.596001646 +0000 UTC m=+1751.763789876" Jan 28 17:45:05 crc kubenswrapper[5001]: I0128 17:45:05.598282 5001 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk" podStartSLOduration=3.598271192 podStartE2EDuration="3.598271192s" podCreationTimestamp="2026-01-28 17:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:45:05.571919461 +0000 UTC m=+1751.739707691" watchObservedRunningTime="2026-01-28 17:45:05.598271192 +0000 UTC m=+1751.766059422" Jan 28 17:45:08 crc kubenswrapper[5001]: I0128 17:45:08.397330 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:45:08 crc kubenswrapper[5001]: I0128 17:45:08.577211 5001 generic.go:334] "Generic (PLEG): container finished" podID="30a2d207-76b3-42da-a996-dcd014b8fcba" containerID="0b2f70b84a1ee0d6e70dc0fbf9538d6596d69a1c11970e88b050d09ea9f0f9c8" exitCode=0 Jan 28 17:45:08 crc kubenswrapper[5001]: I0128 17:45:08.577294 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk" event={"ID":"30a2d207-76b3-42da-a996-dcd014b8fcba","Type":"ContainerDied","Data":"0b2f70b84a1ee0d6e70dc0fbf9538d6596d69a1c11970e88b050d09ea9f0f9c8"} Jan 28 17:45:09 crc kubenswrapper[5001]: I0128 17:45:09.588845 5001 generic.go:334] "Generic (PLEG): container finished" podID="9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f" containerID="dcf89bbdac48eae726b0ce1cbdd4f84d75f6641ba7cd5a53c61dfac3f67bd39b" exitCode=0 Jan 28 17:45:09 crc kubenswrapper[5001]: I0128 17:45:09.588918 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns" event={"ID":"9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f","Type":"ContainerDied","Data":"dcf89bbdac48eae726b0ce1cbdd4f84d75f6641ba7cd5a53c61dfac3f67bd39b"} Jan 28 17:45:11 crc kubenswrapper[5001]: I0128 17:45:11.594130 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:45:11 crc kubenswrapper[5001]: E0128 17:45:11.595045 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:45:13 crc kubenswrapper[5001]: I0128 17:45:13.397324 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:45:13 crc kubenswrapper[5001]: I0128 17:45:13.408757 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:45:13 crc kubenswrapper[5001]: I0128 17:45:13.643284 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.736857 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns" Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.744261 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk" Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.790849 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2vkv\" (UniqueName: \"kubernetes.io/projected/30a2d207-76b3-42da-a996-dcd014b8fcba-kube-api-access-j2vkv\") pod \"30a2d207-76b3-42da-a996-dcd014b8fcba\" (UID: \"30a2d207-76b3-42da-a996-dcd014b8fcba\") " Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.791235 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30a2d207-76b3-42da-a996-dcd014b8fcba-scripts\") pod \"30a2d207-76b3-42da-a996-dcd014b8fcba\" (UID: \"30a2d207-76b3-42da-a996-dcd014b8fcba\") " Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.791276 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f-config-data\") pod \"9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f\" (UID: \"9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f\") " Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.791315 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f-scripts\") pod \"9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f\" (UID: \"9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f\") " Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.791731 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30a2d207-76b3-42da-a996-dcd014b8fcba-config-data\") pod \"30a2d207-76b3-42da-a996-dcd014b8fcba\" (UID: \"30a2d207-76b3-42da-a996-dcd014b8fcba\") " Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.792072 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrdwr\" (UniqueName: \"kubernetes.io/projected/9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f-kube-api-access-lrdwr\") pod \"9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f\" (UID: \"9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f\") " Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.795508 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f-scripts" (OuterVolumeSpecName: "scripts") pod "9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f" (UID: "9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.796074 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a2d207-76b3-42da-a996-dcd014b8fcba-scripts" (OuterVolumeSpecName: "scripts") pod "30a2d207-76b3-42da-a996-dcd014b8fcba" (UID: "30a2d207-76b3-42da-a996-dcd014b8fcba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.796451 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30a2d207-76b3-42da-a996-dcd014b8fcba-kube-api-access-j2vkv" (OuterVolumeSpecName: "kube-api-access-j2vkv") pod "30a2d207-76b3-42da-a996-dcd014b8fcba" (UID: "30a2d207-76b3-42da-a996-dcd014b8fcba"). InnerVolumeSpecName "kube-api-access-j2vkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.797891 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f-kube-api-access-lrdwr" (OuterVolumeSpecName: "kube-api-access-lrdwr") pod "9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f" (UID: "9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f"). InnerVolumeSpecName "kube-api-access-lrdwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.812044 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f-config-data" (OuterVolumeSpecName: "config-data") pod "9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f" (UID: "9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.813999 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a2d207-76b3-42da-a996-dcd014b8fcba-config-data" (OuterVolumeSpecName: "config-data") pod "30a2d207-76b3-42da-a996-dcd014b8fcba" (UID: "30a2d207-76b3-42da-a996-dcd014b8fcba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.894291 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrdwr\" (UniqueName: \"kubernetes.io/projected/9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f-kube-api-access-lrdwr\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.894319 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2vkv\" (UniqueName: \"kubernetes.io/projected/30a2d207-76b3-42da-a996-dcd014b8fcba-kube-api-access-j2vkv\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.894328 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30a2d207-76b3-42da-a996-dcd014b8fcba-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.894337 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.894345 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:14 crc kubenswrapper[5001]: I0128 17:45:14.894353 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30a2d207-76b3-42da-a996-dcd014b8fcba-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.654621 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk" event={"ID":"30a2d207-76b3-42da-a996-dcd014b8fcba","Type":"ContainerDied","Data":"6658dded60e7e444b0fea2217d9fac44d4743ed8da149b3a32278d732b76ee1a"} Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.655062 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6658dded60e7e444b0fea2217d9fac44d4743ed8da149b3a32278d732b76ee1a" Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 
17:45:15.654693 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk" Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.657045 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns" Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.657045 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns" event={"ID":"9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f","Type":"ContainerDied","Data":"8ee884a82b229a575abb52761d1846ca8262aed5382f1dd6c6b77df2a937e912"} Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.657166 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ee884a82b229a575abb52761d1846ca8262aed5382f1dd6c6b77df2a937e912" Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.659330 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"53256273-450e-4371-bd25-a1e8c96f2d77","Type":"ContainerStarted","Data":"721f04d569c128e2e9198ec15a5ff4839ec5ec54189196d43663df8878b19bca"} Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.660108 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.679122 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podStartSLOduration=2.947404848 podStartE2EDuration="13.679103866s" podCreationTimestamp="2026-01-28 17:45:02 +0000 UTC" firstStartedPulling="2026-01-28 17:45:04.028449834 +0000 UTC m=+1750.196238064" lastFinishedPulling="2026-01-28 17:45:14.760148852 +0000 UTC m=+1760.927937082" observedRunningTime="2026-01-28 17:45:15.676939654 +0000 UTC m=+1761.844727884" watchObservedRunningTime="2026-01-28 17:45:15.679103866 +0000 UTC m=+1761.846892116" Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.689416 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.811208 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:45:15 crc kubenswrapper[5001]: E0128 17:45:15.811641 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.811653 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 17:45:15 crc kubenswrapper[5001]: E0128 17:45:15.811678 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ece945-6b6d-41d4-a73e-8f9ac057a9e8" containerName="collect-profiles" Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.811685 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ece945-6b6d-41d4-a73e-8f9ac057a9e8" containerName="collect-profiles" Jan 28 17:45:15 crc kubenswrapper[5001]: E0128 17:45:15.811707 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30a2d207-76b3-42da-a996-dcd014b8fcba" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 17:45:15 
crc kubenswrapper[5001]: I0128 17:45:15.811713 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a2d207-76b3-42da-a996-dcd014b8fcba" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.811894 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="64ece945-6b6d-41d4-a73e-8f9ac057a9e8" containerName="collect-profiles" Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.811910 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="30a2d207-76b3-42da-a996-dcd014b8fcba" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.811921 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.812672 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.814844 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.867306 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.891628 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.893174 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.895358 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.897782 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.915040 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c463cbc-3189-47fd-8b1d-33c332bebcb3-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"7c463cbc-3189-47fd-8b1d-33c332bebcb3\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:45:15 crc kubenswrapper[5001]: I0128 17:45:15.915178 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4ppl\" (UniqueName: \"kubernetes.io/projected/7c463cbc-3189-47fd-8b1d-33c332bebcb3-kube-api-access-s4ppl\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"7c463cbc-3189-47fd-8b1d-33c332bebcb3\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:45:16 crc kubenswrapper[5001]: I0128 17:45:16.018487 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4ppl\" (UniqueName: \"kubernetes.io/projected/7c463cbc-3189-47fd-8b1d-33c332bebcb3-kube-api-access-s4ppl\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"7c463cbc-3189-47fd-8b1d-33c332bebcb3\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:45:16 crc kubenswrapper[5001]: I0128 17:45:16.018661 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7c463cbc-3189-47fd-8b1d-33c332bebcb3-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"7c463cbc-3189-47fd-8b1d-33c332bebcb3\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:45:16 crc kubenswrapper[5001]: I0128 17:45:16.018776 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14bfd635-f134-4e2a-9921-5efdd3f205fc-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"14bfd635-f134-4e2a-9921-5efdd3f205fc\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:45:16 crc kubenswrapper[5001]: I0128 17:45:16.018901 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltk6j\" (UniqueName: \"kubernetes.io/projected/14bfd635-f134-4e2a-9921-5efdd3f205fc-kube-api-access-ltk6j\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"14bfd635-f134-4e2a-9921-5efdd3f205fc\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:45:16 crc kubenswrapper[5001]: I0128 17:45:16.025636 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c463cbc-3189-47fd-8b1d-33c332bebcb3-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"7c463cbc-3189-47fd-8b1d-33c332bebcb3\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:45:16 crc kubenswrapper[5001]: I0128 17:45:16.039604 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4ppl\" (UniqueName: \"kubernetes.io/projected/7c463cbc-3189-47fd-8b1d-33c332bebcb3-kube-api-access-s4ppl\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"7c463cbc-3189-47fd-8b1d-33c332bebcb3\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:45:16 crc kubenswrapper[5001]: I0128 17:45:16.121120 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14bfd635-f134-4e2a-9921-5efdd3f205fc-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"14bfd635-f134-4e2a-9921-5efdd3f205fc\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:45:16 crc kubenswrapper[5001]: I0128 17:45:16.121205 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltk6j\" (UniqueName: \"kubernetes.io/projected/14bfd635-f134-4e2a-9921-5efdd3f205fc-kube-api-access-ltk6j\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"14bfd635-f134-4e2a-9921-5efdd3f205fc\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:45:16 crc kubenswrapper[5001]: I0128 17:45:16.134824 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14bfd635-f134-4e2a-9921-5efdd3f205fc-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"14bfd635-f134-4e2a-9921-5efdd3f205fc\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:45:16 crc kubenswrapper[5001]: I0128 17:45:16.139957 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:45:16 crc kubenswrapper[5001]: I0128 17:45:16.148490 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltk6j\" (UniqueName: \"kubernetes.io/projected/14bfd635-f134-4e2a-9921-5efdd3f205fc-kube-api-access-ltk6j\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"14bfd635-f134-4e2a-9921-5efdd3f205fc\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:45:16 crc kubenswrapper[5001]: I0128 17:45:16.215705 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:45:16 crc kubenswrapper[5001]: I0128 17:45:16.580106 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:45:16 crc kubenswrapper[5001]: I0128 17:45:16.670826 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"7c463cbc-3189-47fd-8b1d-33c332bebcb3","Type":"ContainerStarted","Data":"c3d148abaa915dc8e4ba979497878f5ea648d0b49a38f22850cba75c4fc14ee7"} Jan 28 17:45:16 crc kubenswrapper[5001]: I0128 17:45:16.680355 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:45:16 crc kubenswrapper[5001]: W0128 17:45:16.684242 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14bfd635_f134_4e2a_9921_5efdd3f205fc.slice/crio-9645dc3f1006b4f3596989e60c2bb95e49d637eb4127cc5c52a0ca68388bb201 WatchSource:0}: Error finding container 9645dc3f1006b4f3596989e60c2bb95e49d637eb4127cc5c52a0ca68388bb201: Status 404 returned error can't find the container with id 9645dc3f1006b4f3596989e60c2bb95e49d637eb4127cc5c52a0ca68388bb201 Jan 28 17:45:17 crc kubenswrapper[5001]: I0128 17:45:17.684876 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"14bfd635-f134-4e2a-9921-5efdd3f205fc","Type":"ContainerStarted","Data":"bf237a427d2d198fbb2756b2824327c1e9e523afa76ded8b8ff591e4025d2936"} Jan 28 17:45:17 crc kubenswrapper[5001]: I0128 17:45:17.685662 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"14bfd635-f134-4e2a-9921-5efdd3f205fc","Type":"ContainerStarted","Data":"9645dc3f1006b4f3596989e60c2bb95e49d637eb4127cc5c52a0ca68388bb201"} Jan 28 17:45:17 crc kubenswrapper[5001]: I0128 17:45:17.685815 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:45:17 crc kubenswrapper[5001]: I0128 17:45:17.693813 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"7c463cbc-3189-47fd-8b1d-33c332bebcb3","Type":"ContainerStarted","Data":"4c83b57a3e304c2894e24481b04a8d5b7855423da7f34072d34e9ae5495b6f65"} Jan 28 17:45:17 crc kubenswrapper[5001]: I0128 17:45:17.706690 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=2.706664843 podStartE2EDuration="2.706664843s" podCreationTimestamp="2026-01-28 17:45:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:45:17.700517105 +0000 UTC m=+1763.868305335" 
watchObservedRunningTime="2026-01-28 17:45:17.706664843 +0000 UTC m=+1763.874453073" Jan 28 17:45:17 crc kubenswrapper[5001]: I0128 17:45:17.728520 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=2.728498053 podStartE2EDuration="2.728498053s" podCreationTimestamp="2026-01-28 17:45:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:45:17.723242622 +0000 UTC m=+1763.891030862" watchObservedRunningTime="2026-01-28 17:45:17.728498053 +0000 UTC m=+1763.896286303" Jan 28 17:45:18 crc kubenswrapper[5001]: I0128 17:45:18.700637 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.167638 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.238842 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.634648 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5"] Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.635918 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.639690 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.639869 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.645421 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5"] Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.709111 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc8v9\" (UniqueName: \"kubernetes.io/projected/66bc527a-0ef8-4620-9f6d-5894c0b8f34d-kube-api-access-vc8v9\") pod \"nova-kuttl-cell0-cell-mapping-8qxb5\" (UID: \"66bc527a-0ef8-4620-9f6d-5894c0b8f34d\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.709207 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66bc527a-0ef8-4620-9f6d-5894c0b8f34d-config-data\") pod \"nova-kuttl-cell0-cell-mapping-8qxb5\" (UID: \"66bc527a-0ef8-4620-9f6d-5894c0b8f34d\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.709239 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66bc527a-0ef8-4620-9f6d-5894c0b8f34d-scripts\") pod \"nova-kuttl-cell0-cell-mapping-8qxb5\" (UID: \"66bc527a-0ef8-4620-9f6d-5894c0b8f34d\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.725347 5001 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.726296 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.732052 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.739448 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.783274 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.784903 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.786663 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.798478 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.817190 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb851a70-5fe6-4871-a27c-af537c7718f4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"cb851a70-5fe6-4871-a27c-af537c7718f4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.817306 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kghsd\" (UniqueName: \"kubernetes.io/projected/cb851a70-5fe6-4871-a27c-af537c7718f4-kube-api-access-kghsd\") pod \"nova-kuttl-scheduler-0\" (UID: \"cb851a70-5fe6-4871-a27c-af537c7718f4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.817365 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f369fc2-1255-457d-a93b-76f823c3b10a-logs\") pod \"nova-kuttl-api-0\" (UID: \"0f369fc2-1255-457d-a93b-76f823c3b10a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.817405 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc8v9\" (UniqueName: \"kubernetes.io/projected/66bc527a-0ef8-4620-9f6d-5894c0b8f34d-kube-api-access-vc8v9\") pod \"nova-kuttl-cell0-cell-mapping-8qxb5\" (UID: \"66bc527a-0ef8-4620-9f6d-5894c0b8f34d\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.817431 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ctmr\" (UniqueName: \"kubernetes.io/projected/0f369fc2-1255-457d-a93b-76f823c3b10a-kube-api-access-9ctmr\") pod \"nova-kuttl-api-0\" (UID: \"0f369fc2-1255-457d-a93b-76f823c3b10a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.817461 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66bc527a-0ef8-4620-9f6d-5894c0b8f34d-config-data\") 
pod \"nova-kuttl-cell0-cell-mapping-8qxb5\" (UID: \"66bc527a-0ef8-4620-9f6d-5894c0b8f34d\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.817489 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66bc527a-0ef8-4620-9f6d-5894c0b8f34d-scripts\") pod \"nova-kuttl-cell0-cell-mapping-8qxb5\" (UID: \"66bc527a-0ef8-4620-9f6d-5894c0b8f34d\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.817523 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f369fc2-1255-457d-a93b-76f823c3b10a-config-data\") pod \"nova-kuttl-api-0\" (UID: \"0f369fc2-1255-457d-a93b-76f823c3b10a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.828064 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66bc527a-0ef8-4620-9f6d-5894c0b8f34d-config-data\") pod \"nova-kuttl-cell0-cell-mapping-8qxb5\" (UID: \"66bc527a-0ef8-4620-9f6d-5894c0b8f34d\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.837544 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66bc527a-0ef8-4620-9f6d-5894c0b8f34d-scripts\") pod \"nova-kuttl-cell0-cell-mapping-8qxb5\" (UID: \"66bc527a-0ef8-4620-9f6d-5894c0b8f34d\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.846812 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc8v9\" (UniqueName: \"kubernetes.io/projected/66bc527a-0ef8-4620-9f6d-5894c0b8f34d-kube-api-access-vc8v9\") pod \"nova-kuttl-cell0-cell-mapping-8qxb5\" (UID: \"66bc527a-0ef8-4620-9f6d-5894c0b8f34d\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.918926 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kghsd\" (UniqueName: \"kubernetes.io/projected/cb851a70-5fe6-4871-a27c-af537c7718f4-kube-api-access-kghsd\") pod \"nova-kuttl-scheduler-0\" (UID: \"cb851a70-5fe6-4871-a27c-af537c7718f4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.919029 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f369fc2-1255-457d-a93b-76f823c3b10a-logs\") pod \"nova-kuttl-api-0\" (UID: \"0f369fc2-1255-457d-a93b-76f823c3b10a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.919081 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ctmr\" (UniqueName: \"kubernetes.io/projected/0f369fc2-1255-457d-a93b-76f823c3b10a-kube-api-access-9ctmr\") pod \"nova-kuttl-api-0\" (UID: \"0f369fc2-1255-457d-a93b-76f823c3b10a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.919137 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f369fc2-1255-457d-a93b-76f823c3b10a-config-data\") pod \"nova-kuttl-api-0\" (UID: 
\"0f369fc2-1255-457d-a93b-76f823c3b10a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.919197 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb851a70-5fe6-4871-a27c-af537c7718f4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"cb851a70-5fe6-4871-a27c-af537c7718f4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.919653 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f369fc2-1255-457d-a93b-76f823c3b10a-logs\") pod \"nova-kuttl-api-0\" (UID: \"0f369fc2-1255-457d-a93b-76f823c3b10a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.929886 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb851a70-5fe6-4871-a27c-af537c7718f4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"cb851a70-5fe6-4871-a27c-af537c7718f4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.933880 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.954548 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f369fc2-1255-457d-a93b-76f823c3b10a-config-data\") pod \"nova-kuttl-api-0\" (UID: \"0f369fc2-1255-457d-a93b-76f823c3b10a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.956410 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.957421 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ctmr\" (UniqueName: \"kubernetes.io/projected/0f369fc2-1255-457d-a93b-76f823c3b10a-kube-api-access-9ctmr\") pod \"nova-kuttl-api-0\" (UID: \"0f369fc2-1255-457d-a93b-76f823c3b10a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.963518 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.973485 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5" Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.977905 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:45:21 crc kubenswrapper[5001]: I0128 17:45:21.982570 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kghsd\" (UniqueName: \"kubernetes.io/projected/cb851a70-5fe6-4871-a27c-af537c7718f4-kube-api-access-kghsd\") pod \"nova-kuttl-scheduler-0\" (UID: \"cb851a70-5fe6-4871-a27c-af537c7718f4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.023452 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/156ded69-8a78-443a-815e-03550c64af18-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"156ded69-8a78-443a-815e-03550c64af18\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.023764 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx5x7\" (UniqueName: \"kubernetes.io/projected/156ded69-8a78-443a-815e-03550c64af18-kube-api-access-nx5x7\") pod \"nova-kuttl-metadata-0\" (UID: \"156ded69-8a78-443a-815e-03550c64af18\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.023861 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/156ded69-8a78-443a-815e-03550c64af18-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"156ded69-8a78-443a-815e-03550c64af18\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.042629 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.104581 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.126159 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/156ded69-8a78-443a-815e-03550c64af18-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"156ded69-8a78-443a-815e-03550c64af18\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.126285 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx5x7\" (UniqueName: \"kubernetes.io/projected/156ded69-8a78-443a-815e-03550c64af18-kube-api-access-nx5x7\") pod \"nova-kuttl-metadata-0\" (UID: \"156ded69-8a78-443a-815e-03550c64af18\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.126322 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/156ded69-8a78-443a-815e-03550c64af18-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"156ded69-8a78-443a-815e-03550c64af18\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.128332 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/156ded69-8a78-443a-815e-03550c64af18-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"156ded69-8a78-443a-815e-03550c64af18\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.147303 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/156ded69-8a78-443a-815e-03550c64af18-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"156ded69-8a78-443a-815e-03550c64af18\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.158160 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx5x7\" (UniqueName: \"kubernetes.io/projected/156ded69-8a78-443a-815e-03550c64af18-kube-api-access-nx5x7\") pod \"nova-kuttl-metadata-0\" (UID: \"156ded69-8a78-443a-815e-03550c64af18\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.179966 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn"] Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.181306 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.191175 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.191424 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.191421 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn"] Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.201317 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x"] Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.205604 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.209284 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x"] Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.229904 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd4daf12-6f31-45e0-b586-2387cc93d41a-scripts\") pod \"nova-kuttl-cell1-host-discover-lv65x\" (UID: \"dd4daf12-6f31-45e0-b586-2387cc93d41a\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.229961 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6315e7d2-32f1-4654-b00b-cffdf8cf9879-config-data\") pod \"nova-kuttl-cell1-cell-mapping-tczwn\" (UID: \"6315e7d2-32f1-4654-b00b-cffdf8cf9879\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.230587 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrdd6\" (UniqueName: \"kubernetes.io/projected/dd4daf12-6f31-45e0-b586-2387cc93d41a-kube-api-access-lrdd6\") pod \"nova-kuttl-cell1-host-discover-lv65x\" (UID: \"dd4daf12-6f31-45e0-b586-2387cc93d41a\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.230614 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6315e7d2-32f1-4654-b00b-cffdf8cf9879-scripts\") pod \"nova-kuttl-cell1-cell-mapping-tczwn\" (UID: \"6315e7d2-32f1-4654-b00b-cffdf8cf9879\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.230650 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd4daf12-6f31-45e0-b586-2387cc93d41a-config-data\") pod \"nova-kuttl-cell1-host-discover-lv65x\" (UID: \"dd4daf12-6f31-45e0-b586-2387cc93d41a\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.230679 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmljp\" (UniqueName: \"kubernetes.io/projected/6315e7d2-32f1-4654-b00b-cffdf8cf9879-kube-api-access-mmljp\") pod \"nova-kuttl-cell1-cell-mapping-tczwn\" (UID: \"6315e7d2-32f1-4654-b00b-cffdf8cf9879\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.320288 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.332617 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6315e7d2-32f1-4654-b00b-cffdf8cf9879-config-data\") pod \"nova-kuttl-cell1-cell-mapping-tczwn\" (UID: \"6315e7d2-32f1-4654-b00b-cffdf8cf9879\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.332673 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrdd6\" (UniqueName: \"kubernetes.io/projected/dd4daf12-6f31-45e0-b586-2387cc93d41a-kube-api-access-lrdd6\") pod \"nova-kuttl-cell1-host-discover-lv65x\" (UID: \"dd4daf12-6f31-45e0-b586-2387cc93d41a\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.332699 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6315e7d2-32f1-4654-b00b-cffdf8cf9879-scripts\") pod \"nova-kuttl-cell1-cell-mapping-tczwn\" (UID: \"6315e7d2-32f1-4654-b00b-cffdf8cf9879\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.332732 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd4daf12-6f31-45e0-b586-2387cc93d41a-config-data\") pod \"nova-kuttl-cell1-host-discover-lv65x\" (UID: \"dd4daf12-6f31-45e0-b586-2387cc93d41a\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.332757 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmljp\" (UniqueName: \"kubernetes.io/projected/6315e7d2-32f1-4654-b00b-cffdf8cf9879-kube-api-access-mmljp\") pod \"nova-kuttl-cell1-cell-mapping-tczwn\" (UID: \"6315e7d2-32f1-4654-b00b-cffdf8cf9879\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.332819 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd4daf12-6f31-45e0-b586-2387cc93d41a-scripts\") pod \"nova-kuttl-cell1-host-discover-lv65x\" (UID: \"dd4daf12-6f31-45e0-b586-2387cc93d41a\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.340729 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6315e7d2-32f1-4654-b00b-cffdf8cf9879-config-data\") pod \"nova-kuttl-cell1-cell-mapping-tczwn\" (UID: \"6315e7d2-32f1-4654-b00b-cffdf8cf9879\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.340817 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd4daf12-6f31-45e0-b586-2387cc93d41a-scripts\") pod \"nova-kuttl-cell1-host-discover-lv65x\" (UID: \"dd4daf12-6f31-45e0-b586-2387cc93d41a\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.340890 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6315e7d2-32f1-4654-b00b-cffdf8cf9879-scripts\") pod 
\"nova-kuttl-cell1-cell-mapping-tczwn\" (UID: \"6315e7d2-32f1-4654-b00b-cffdf8cf9879\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.348301 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd4daf12-6f31-45e0-b586-2387cc93d41a-config-data\") pod \"nova-kuttl-cell1-host-discover-lv65x\" (UID: \"dd4daf12-6f31-45e0-b586-2387cc93d41a\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.362722 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmljp\" (UniqueName: \"kubernetes.io/projected/6315e7d2-32f1-4654-b00b-cffdf8cf9879-kube-api-access-mmljp\") pod \"nova-kuttl-cell1-cell-mapping-tczwn\" (UID: \"6315e7d2-32f1-4654-b00b-cffdf8cf9879\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.362809 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrdd6\" (UniqueName: \"kubernetes.io/projected/dd4daf12-6f31-45e0-b586-2387cc93d41a-kube-api-access-lrdd6\") pod \"nova-kuttl-cell1-host-discover-lv65x\" (UID: \"dd4daf12-6f31-45e0-b586-2387cc93d41a\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.488608 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5"] Jan 28 17:45:22 crc kubenswrapper[5001]: W0128 17:45:22.493881 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66bc527a_0ef8_4620_9f6d_5894c0b8f34d.slice/crio-c5868aed17b43c54438570b89b8c85a630be7b914dde63294417ed97b8b8366f WatchSource:0}: Error finding container c5868aed17b43c54438570b89b8c85a630be7b914dde63294417ed97b8b8366f: Status 404 returned error can't find the container with id c5868aed17b43c54438570b89b8c85a630be7b914dde63294417ed97b8b8366f Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.523111 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.552017 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.613444 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:45:22 crc kubenswrapper[5001]: W0128 17:45:22.641436 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb851a70_5fe6_4871_a27c_af537c7718f4.slice/crio-4c3da348b3810e1cab986232dd0f9be3ac6531aa2ac54c1b9c8ad69c08a17093 WatchSource:0}: Error finding container 4c3da348b3810e1cab986232dd0f9be3ac6531aa2ac54c1b9c8ad69c08a17093: Status 404 returned error can't find the container with id 4c3da348b3810e1cab986232dd0f9be3ac6531aa2ac54c1b9c8ad69c08a17093 Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.730704 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"cb851a70-5fe6-4871-a27c-af537c7718f4","Type":"ContainerStarted","Data":"4c3da348b3810e1cab986232dd0f9be3ac6531aa2ac54c1b9c8ad69c08a17093"} Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.732259 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5" event={"ID":"66bc527a-0ef8-4620-9f6d-5894c0b8f34d","Type":"ContainerStarted","Data":"c5868aed17b43c54438570b89b8c85a630be7b914dde63294417ed97b8b8366f"} Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.735179 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:45:22 crc kubenswrapper[5001]: W0128 17:45:22.743837 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f369fc2_1255_457d_a93b_76f823c3b10a.slice/crio-456f2e13d3fd5cb07ace66d199b90de86bd11a56b4b6e8f674c6420853f7d2ad WatchSource:0}: Error finding container 456f2e13d3fd5cb07ace66d199b90de86bd11a56b4b6e8f674c6420853f7d2ad: Status 404 returned error can't find the container with id 456f2e13d3fd5cb07ace66d199b90de86bd11a56b4b6e8f674c6420853f7d2ad Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.833521 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:45:22 crc kubenswrapper[5001]: W0128 17:45:22.865531 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod156ded69_8a78_443a_815e_03550c64af18.slice/crio-a98220de7f906ca093ce812c75ed6684bd6065a4c435c9b2e5254b2125b6da9b WatchSource:0}: Error finding container a98220de7f906ca093ce812c75ed6684bd6065a4c435c9b2e5254b2125b6da9b: Status 404 returned error can't find the container with id a98220de7f906ca093ce812c75ed6684bd6065a4c435c9b2e5254b2125b6da9b Jan 28 17:45:22 crc kubenswrapper[5001]: I0128 17:45:22.999379 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn"] Jan 28 17:45:23 crc kubenswrapper[5001]: W0128 17:45:22.999945 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6315e7d2_32f1_4654_b00b_cffdf8cf9879.slice/crio-c207ac889e7dfc160d9b817f80ad17d4abd0b73bdb736518f7ad353d925f6f65 WatchSource:0}: Error finding container c207ac889e7dfc160d9b817f80ad17d4abd0b73bdb736518f7ad353d925f6f65: Status 404 returned error can't find the container with id 
c207ac889e7dfc160d9b817f80ad17d4abd0b73bdb736518f7ad353d925f6f65 Jan 28 17:45:23 crc kubenswrapper[5001]: I0128 17:45:23.092610 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x"] Jan 28 17:45:23 crc kubenswrapper[5001]: W0128 17:45:23.099566 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd4daf12_6f31_45e0_b586_2387cc93d41a.slice/crio-2dd032d463dace2cddb172ab27d418b25eb147ba1bf645b7f0090ea86330944c WatchSource:0}: Error finding container 2dd032d463dace2cddb172ab27d418b25eb147ba1bf645b7f0090ea86330944c: Status 404 returned error can't find the container with id 2dd032d463dace2cddb172ab27d418b25eb147ba1bf645b7f0090ea86330944c Jan 28 17:45:23 crc kubenswrapper[5001]: I0128 17:45:23.742259 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"cb851a70-5fe6-4871-a27c-af537c7718f4","Type":"ContainerStarted","Data":"e9b7353429a0778ba2f4c0cdd6e54afb0dd9999635f518c57a3259f521e3aeb3"} Jan 28 17:45:23 crc kubenswrapper[5001]: I0128 17:45:23.750602 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5" event={"ID":"66bc527a-0ef8-4620-9f6d-5894c0b8f34d","Type":"ContainerStarted","Data":"a32f587123b9d75fa0381e0e0962056e83dd1a20f7ad2d3aca3e09231deaf98a"} Jan 28 17:45:23 crc kubenswrapper[5001]: I0128 17:45:23.769497 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"156ded69-8a78-443a-815e-03550c64af18","Type":"ContainerStarted","Data":"e3931bf926393a31d5d9a30cd04f70574b09f1011c75430449f3a2cccefed7ea"} Jan 28 17:45:23 crc kubenswrapper[5001]: I0128 17:45:23.769802 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"156ded69-8a78-443a-815e-03550c64af18","Type":"ContainerStarted","Data":"249e053832484436d8729384f1a47e1bd2dea0ccc70467085a9dd824698cc0ca"} Jan 28 17:45:23 crc kubenswrapper[5001]: I0128 17:45:23.770517 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"156ded69-8a78-443a-815e-03550c64af18","Type":"ContainerStarted","Data":"a98220de7f906ca093ce812c75ed6684bd6065a4c435c9b2e5254b2125b6da9b"} Jan 28 17:45:23 crc kubenswrapper[5001]: I0128 17:45:23.776794 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.776771067 podStartE2EDuration="2.776771067s" podCreationTimestamp="2026-01-28 17:45:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:45:23.761357142 +0000 UTC m=+1769.929145372" watchObservedRunningTime="2026-01-28 17:45:23.776771067 +0000 UTC m=+1769.944559297" Jan 28 17:45:23 crc kubenswrapper[5001]: I0128 17:45:23.777179 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn" event={"ID":"6315e7d2-32f1-4654-b00b-cffdf8cf9879","Type":"ContainerStarted","Data":"05ba5d980bb66d25cb7cb635480f535432f5bcd805a5e4979c7950fe4724c439"} Jan 28 17:45:23 crc kubenswrapper[5001]: I0128 17:45:23.777216 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn" 
event={"ID":"6315e7d2-32f1-4654-b00b-cffdf8cf9879","Type":"ContainerStarted","Data":"c207ac889e7dfc160d9b817f80ad17d4abd0b73bdb736518f7ad353d925f6f65"} Jan 28 17:45:23 crc kubenswrapper[5001]: I0128 17:45:23.779617 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0f369fc2-1255-457d-a93b-76f823c3b10a","Type":"ContainerStarted","Data":"3cb269027525f3c6500b07861092c29bfb46d8da1d2d9a1ffb2365edc3fc9b86"} Jan 28 17:45:23 crc kubenswrapper[5001]: I0128 17:45:23.779653 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0f369fc2-1255-457d-a93b-76f823c3b10a","Type":"ContainerStarted","Data":"836fe88f2a9f55c30848775eb06096d77073a15fc80165731d649706cd9e2a7d"} Jan 28 17:45:23 crc kubenswrapper[5001]: I0128 17:45:23.779699 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0f369fc2-1255-457d-a93b-76f823c3b10a","Type":"ContainerStarted","Data":"456f2e13d3fd5cb07ace66d199b90de86bd11a56b4b6e8f674c6420853f7d2ad"} Jan 28 17:45:23 crc kubenswrapper[5001]: I0128 17:45:23.781587 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" event={"ID":"dd4daf12-6f31-45e0-b586-2387cc93d41a","Type":"ContainerStarted","Data":"9f5da6f91ee6c03f389522584e606e7efaeeeb481792d44aa9afb0a3240926b7"} Jan 28 17:45:23 crc kubenswrapper[5001]: I0128 17:45:23.781616 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" event={"ID":"dd4daf12-6f31-45e0-b586-2387cc93d41a","Type":"ContainerStarted","Data":"2dd032d463dace2cddb172ab27d418b25eb147ba1bf645b7f0090ea86330944c"} Jan 28 17:45:23 crc kubenswrapper[5001]: I0128 17:45:23.796581 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5" podStartSLOduration=2.796563549 podStartE2EDuration="2.796563549s" podCreationTimestamp="2026-01-28 17:45:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:45:23.791031319 +0000 UTC m=+1769.958819579" watchObservedRunningTime="2026-01-28 17:45:23.796563549 +0000 UTC m=+1769.964351779" Jan 28 17:45:23 crc kubenswrapper[5001]: I0128 17:45:23.804440 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn" podStartSLOduration=1.804419286 podStartE2EDuration="1.804419286s" podCreationTimestamp="2026-01-28 17:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:45:23.8028358 +0000 UTC m=+1769.970624030" watchObservedRunningTime="2026-01-28 17:45:23.804419286 +0000 UTC m=+1769.972207516" Jan 28 17:45:23 crc kubenswrapper[5001]: I0128 17:45:23.856613 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.855512381 podStartE2EDuration="2.855512381s" podCreationTimestamp="2026-01-28 17:45:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:45:23.850689982 +0000 UTC m=+1770.018478212" watchObservedRunningTime="2026-01-28 17:45:23.855512381 +0000 UTC m=+1770.023300611" Jan 28 17:45:23 crc kubenswrapper[5001]: I0128 
17:45:23.858005 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" podStartSLOduration=1.857987512 podStartE2EDuration="1.857987512s" podCreationTimestamp="2026-01-28 17:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:45:23.830337074 +0000 UTC m=+1769.998125304" watchObservedRunningTime="2026-01-28 17:45:23.857987512 +0000 UTC m=+1770.025775742" Jan 28 17:45:23 crc kubenswrapper[5001]: I0128 17:45:23.875123 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.875101387 podStartE2EDuration="2.875101387s" podCreationTimestamp="2026-01-28 17:45:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:45:23.869967038 +0000 UTC m=+1770.037755268" watchObservedRunningTime="2026-01-28 17:45:23.875101387 +0000 UTC m=+1770.042889617" Jan 28 17:45:25 crc kubenswrapper[5001]: I0128 17:45:25.594204 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:45:25 crc kubenswrapper[5001]: E0128 17:45:25.594730 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:45:26 crc kubenswrapper[5001]: I0128 17:45:26.818414 5001 generic.go:334] "Generic (PLEG): container finished" podID="dd4daf12-6f31-45e0-b586-2387cc93d41a" containerID="9f5da6f91ee6c03f389522584e606e7efaeeeb481792d44aa9afb0a3240926b7" exitCode=255 Jan 28 17:45:26 crc kubenswrapper[5001]: I0128 17:45:26.818578 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" event={"ID":"dd4daf12-6f31-45e0-b586-2387cc93d41a","Type":"ContainerDied","Data":"9f5da6f91ee6c03f389522584e606e7efaeeeb481792d44aa9afb0a3240926b7"} Jan 28 17:45:26 crc kubenswrapper[5001]: I0128 17:45:26.819125 5001 scope.go:117] "RemoveContainer" containerID="9f5da6f91ee6c03f389522584e606e7efaeeeb481792d44aa9afb0a3240926b7" Jan 28 17:45:27 crc kubenswrapper[5001]: I0128 17:45:27.043133 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:27 crc kubenswrapper[5001]: I0128 17:45:27.321524 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:27 crc kubenswrapper[5001]: I0128 17:45:27.321882 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:27 crc kubenswrapper[5001]: I0128 17:45:27.830751 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" event={"ID":"dd4daf12-6f31-45e0-b586-2387cc93d41a","Type":"ContainerStarted","Data":"9ad494391725c1b8be0c36afe61c3950489ddbbe3e0a2fc96dc0c5aaa639a59c"} Jan 28 17:45:27 crc kubenswrapper[5001]: I0128 17:45:27.833032 5001 generic.go:334] "Generic (PLEG): container finished" 
podID="66bc527a-0ef8-4620-9f6d-5894c0b8f34d" containerID="a32f587123b9d75fa0381e0e0962056e83dd1a20f7ad2d3aca3e09231deaf98a" exitCode=0 Jan 28 17:45:27 crc kubenswrapper[5001]: I0128 17:45:27.833073 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5" event={"ID":"66bc527a-0ef8-4620-9f6d-5894c0b8f34d","Type":"ContainerDied","Data":"a32f587123b9d75fa0381e0e0962056e83dd1a20f7ad2d3aca3e09231deaf98a"} Jan 28 17:45:28 crc kubenswrapper[5001]: I0128 17:45:28.841709 5001 generic.go:334] "Generic (PLEG): container finished" podID="6315e7d2-32f1-4654-b00b-cffdf8cf9879" containerID="05ba5d980bb66d25cb7cb635480f535432f5bcd805a5e4979c7950fe4724c439" exitCode=0 Jan 28 17:45:28 crc kubenswrapper[5001]: I0128 17:45:28.841804 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn" event={"ID":"6315e7d2-32f1-4654-b00b-cffdf8cf9879","Type":"ContainerDied","Data":"05ba5d980bb66d25cb7cb635480f535432f5bcd805a5e4979c7950fe4724c439"} Jan 28 17:45:29 crc kubenswrapper[5001]: I0128 17:45:29.035689 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-db-sync-rx4q2"] Jan 28 17:45:29 crc kubenswrapper[5001]: I0128 17:45:29.046244 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-db-sync-rx4q2"] Jan 28 17:45:29 crc kubenswrapper[5001]: I0128 17:45:29.163900 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5" Jan 28 17:45:29 crc kubenswrapper[5001]: I0128 17:45:29.257968 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66bc527a-0ef8-4620-9f6d-5894c0b8f34d-scripts\") pod \"66bc527a-0ef8-4620-9f6d-5894c0b8f34d\" (UID: \"66bc527a-0ef8-4620-9f6d-5894c0b8f34d\") " Jan 28 17:45:29 crc kubenswrapper[5001]: I0128 17:45:29.258024 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66bc527a-0ef8-4620-9f6d-5894c0b8f34d-config-data\") pod \"66bc527a-0ef8-4620-9f6d-5894c0b8f34d\" (UID: \"66bc527a-0ef8-4620-9f6d-5894c0b8f34d\") " Jan 28 17:45:29 crc kubenswrapper[5001]: I0128 17:45:29.258127 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc8v9\" (UniqueName: \"kubernetes.io/projected/66bc527a-0ef8-4620-9f6d-5894c0b8f34d-kube-api-access-vc8v9\") pod \"66bc527a-0ef8-4620-9f6d-5894c0b8f34d\" (UID: \"66bc527a-0ef8-4620-9f6d-5894c0b8f34d\") " Jan 28 17:45:29 crc kubenswrapper[5001]: I0128 17:45:29.263708 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66bc527a-0ef8-4620-9f6d-5894c0b8f34d-kube-api-access-vc8v9" (OuterVolumeSpecName: "kube-api-access-vc8v9") pod "66bc527a-0ef8-4620-9f6d-5894c0b8f34d" (UID: "66bc527a-0ef8-4620-9f6d-5894c0b8f34d"). InnerVolumeSpecName "kube-api-access-vc8v9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:45:29 crc kubenswrapper[5001]: I0128 17:45:29.275344 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66bc527a-0ef8-4620-9f6d-5894c0b8f34d-scripts" (OuterVolumeSpecName: "scripts") pod "66bc527a-0ef8-4620-9f6d-5894c0b8f34d" (UID: "66bc527a-0ef8-4620-9f6d-5894c0b8f34d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:45:29 crc kubenswrapper[5001]: I0128 17:45:29.278521 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66bc527a-0ef8-4620-9f6d-5894c0b8f34d-config-data" (OuterVolumeSpecName: "config-data") pod "66bc527a-0ef8-4620-9f6d-5894c0b8f34d" (UID: "66bc527a-0ef8-4620-9f6d-5894c0b8f34d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:45:29 crc kubenswrapper[5001]: I0128 17:45:29.360437 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc8v9\" (UniqueName: \"kubernetes.io/projected/66bc527a-0ef8-4620-9f6d-5894c0b8f34d-kube-api-access-vc8v9\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:29 crc kubenswrapper[5001]: I0128 17:45:29.360480 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66bc527a-0ef8-4620-9f6d-5894c0b8f34d-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:29 crc kubenswrapper[5001]: I0128 17:45:29.360493 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66bc527a-0ef8-4620-9f6d-5894c0b8f34d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:29 crc kubenswrapper[5001]: I0128 17:45:29.853693 5001 generic.go:334] "Generic (PLEG): container finished" podID="dd4daf12-6f31-45e0-b586-2387cc93d41a" containerID="9ad494391725c1b8be0c36afe61c3950489ddbbe3e0a2fc96dc0c5aaa639a59c" exitCode=0 Jan 28 17:45:29 crc kubenswrapper[5001]: I0128 17:45:29.853769 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" event={"ID":"dd4daf12-6f31-45e0-b586-2387cc93d41a","Type":"ContainerDied","Data":"9ad494391725c1b8be0c36afe61c3950489ddbbe3e0a2fc96dc0c5aaa639a59c"} Jan 28 17:45:29 crc kubenswrapper[5001]: I0128 17:45:29.854775 5001 scope.go:117] "RemoveContainer" containerID="9f5da6f91ee6c03f389522584e606e7efaeeeb481792d44aa9afb0a3240926b7" Jan 28 17:45:29 crc kubenswrapper[5001]: I0128 17:45:29.857252 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5" Jan 28 17:45:29 crc kubenswrapper[5001]: I0128 17:45:29.858208 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5" event={"ID":"66bc527a-0ef8-4620-9f6d-5894c0b8f34d","Type":"ContainerDied","Data":"c5868aed17b43c54438570b89b8c85a630be7b914dde63294417ed97b8b8366f"} Jan 28 17:45:29 crc kubenswrapper[5001]: I0128 17:45:29.858268 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5868aed17b43c54438570b89b8c85a630be7b914dde63294417ed97b8b8366f" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.081576 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.081784 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="0f369fc2-1255-457d-a93b-76f823c3b10a" containerName="nova-kuttl-api-log" containerID="cri-o://836fe88f2a9f55c30848775eb06096d77073a15fc80165731d649706cd9e2a7d" gracePeriod=30 Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.081920 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="0f369fc2-1255-457d-a93b-76f823c3b10a" containerName="nova-kuttl-api-api" containerID="cri-o://3cb269027525f3c6500b07861092c29bfb46d8da1d2d9a1ffb2365edc3fc9b86" gracePeriod=30 Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.112534 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.112798 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="cb851a70-5fe6-4871-a27c-af537c7718f4" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://e9b7353429a0778ba2f4c0cdd6e54afb0dd9999635f518c57a3259f521e3aeb3" gracePeriod=30 Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.223493 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.223796 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="156ded69-8a78-443a-815e-03550c64af18" containerName="nova-kuttl-metadata-log" containerID="cri-o://249e053832484436d8729384f1a47e1bd2dea0ccc70467085a9dd824698cc0ca" gracePeriod=30 Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.224040 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="156ded69-8a78-443a-815e-03550c64af18" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://e3931bf926393a31d5d9a30cd04f70574b09f1011c75430449f3a2cccefed7ea" gracePeriod=30 Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.242421 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.276155 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6315e7d2-32f1-4654-b00b-cffdf8cf9879-scripts\") pod \"6315e7d2-32f1-4654-b00b-cffdf8cf9879\" (UID: \"6315e7d2-32f1-4654-b00b-cffdf8cf9879\") " Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.276211 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6315e7d2-32f1-4654-b00b-cffdf8cf9879-config-data\") pod \"6315e7d2-32f1-4654-b00b-cffdf8cf9879\" (UID: \"6315e7d2-32f1-4654-b00b-cffdf8cf9879\") " Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.276314 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmljp\" (UniqueName: \"kubernetes.io/projected/6315e7d2-32f1-4654-b00b-cffdf8cf9879-kube-api-access-mmljp\") pod \"6315e7d2-32f1-4654-b00b-cffdf8cf9879\" (UID: \"6315e7d2-32f1-4654-b00b-cffdf8cf9879\") " Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.283055 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6315e7d2-32f1-4654-b00b-cffdf8cf9879-kube-api-access-mmljp" (OuterVolumeSpecName: "kube-api-access-mmljp") pod "6315e7d2-32f1-4654-b00b-cffdf8cf9879" (UID: "6315e7d2-32f1-4654-b00b-cffdf8cf9879"). InnerVolumeSpecName "kube-api-access-mmljp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.292150 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6315e7d2-32f1-4654-b00b-cffdf8cf9879-scripts" (OuterVolumeSpecName: "scripts") pod "6315e7d2-32f1-4654-b00b-cffdf8cf9879" (UID: "6315e7d2-32f1-4654-b00b-cffdf8cf9879"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.302303 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6315e7d2-32f1-4654-b00b-cffdf8cf9879-config-data" (OuterVolumeSpecName: "config-data") pod "6315e7d2-32f1-4654-b00b-cffdf8cf9879" (UID: "6315e7d2-32f1-4654-b00b-cffdf8cf9879"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.378441 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmljp\" (UniqueName: \"kubernetes.io/projected/6315e7d2-32f1-4654-b00b-cffdf8cf9879-kube-api-access-mmljp\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.378904 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6315e7d2-32f1-4654-b00b-cffdf8cf9879-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.378920 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6315e7d2-32f1-4654-b00b-cffdf8cf9879-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.592090 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.602967 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b1b7798-ae74-4b6b-ade7-f282fa5e3253" path="/var/lib/kubelet/pods/3b1b7798-ae74-4b6b-ade7-f282fa5e3253/volumes" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.682767 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f369fc2-1255-457d-a93b-76f823c3b10a-logs\") pod \"0f369fc2-1255-457d-a93b-76f823c3b10a\" (UID: \"0f369fc2-1255-457d-a93b-76f823c3b10a\") " Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.682849 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ctmr\" (UniqueName: \"kubernetes.io/projected/0f369fc2-1255-457d-a93b-76f823c3b10a-kube-api-access-9ctmr\") pod \"0f369fc2-1255-457d-a93b-76f823c3b10a\" (UID: \"0f369fc2-1255-457d-a93b-76f823c3b10a\") " Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.682900 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f369fc2-1255-457d-a93b-76f823c3b10a-config-data\") pod \"0f369fc2-1255-457d-a93b-76f823c3b10a\" (UID: \"0f369fc2-1255-457d-a93b-76f823c3b10a\") " Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.684564 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f369fc2-1255-457d-a93b-76f823c3b10a-logs" (OuterVolumeSpecName: "logs") pod "0f369fc2-1255-457d-a93b-76f823c3b10a" (UID: "0f369fc2-1255-457d-a93b-76f823c3b10a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.686667 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f369fc2-1255-457d-a93b-76f823c3b10a-kube-api-access-9ctmr" (OuterVolumeSpecName: "kube-api-access-9ctmr") pod "0f369fc2-1255-457d-a93b-76f823c3b10a" (UID: "0f369fc2-1255-457d-a93b-76f823c3b10a"). InnerVolumeSpecName "kube-api-access-9ctmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.703535 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f369fc2-1255-457d-a93b-76f823c3b10a-config-data" (OuterVolumeSpecName: "config-data") pod "0f369fc2-1255-457d-a93b-76f823c3b10a" (UID: "0f369fc2-1255-457d-a93b-76f823c3b10a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.737408 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.783875 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/156ded69-8a78-443a-815e-03550c64af18-logs\") pod \"156ded69-8a78-443a-815e-03550c64af18\" (UID: \"156ded69-8a78-443a-815e-03550c64af18\") " Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.783960 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx5x7\" (UniqueName: \"kubernetes.io/projected/156ded69-8a78-443a-815e-03550c64af18-kube-api-access-nx5x7\") pod \"156ded69-8a78-443a-815e-03550c64af18\" (UID: \"156ded69-8a78-443a-815e-03550c64af18\") " Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.784099 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/156ded69-8a78-443a-815e-03550c64af18-config-data\") pod \"156ded69-8a78-443a-815e-03550c64af18\" (UID: \"156ded69-8a78-443a-815e-03550c64af18\") " Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.784370 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f369fc2-1255-457d-a93b-76f823c3b10a-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.784384 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ctmr\" (UniqueName: \"kubernetes.io/projected/0f369fc2-1255-457d-a93b-76f823c3b10a-kube-api-access-9ctmr\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.784409 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f369fc2-1255-457d-a93b-76f823c3b10a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.784583 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/156ded69-8a78-443a-815e-03550c64af18-logs" (OuterVolumeSpecName: "logs") pod "156ded69-8a78-443a-815e-03550c64af18" (UID: "156ded69-8a78-443a-815e-03550c64af18"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.787300 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/156ded69-8a78-443a-815e-03550c64af18-kube-api-access-nx5x7" (OuterVolumeSpecName: "kube-api-access-nx5x7") pod "156ded69-8a78-443a-815e-03550c64af18" (UID: "156ded69-8a78-443a-815e-03550c64af18"). InnerVolumeSpecName "kube-api-access-nx5x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.803203 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/156ded69-8a78-443a-815e-03550c64af18-config-data" (OuterVolumeSpecName: "config-data") pod "156ded69-8a78-443a-815e-03550c64af18" (UID: "156ded69-8a78-443a-815e-03550c64af18"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.868709 5001 generic.go:334] "Generic (PLEG): container finished" podID="0f369fc2-1255-457d-a93b-76f823c3b10a" containerID="3cb269027525f3c6500b07861092c29bfb46d8da1d2d9a1ffb2365edc3fc9b86" exitCode=0 Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.868757 5001 generic.go:334] "Generic (PLEG): container finished" podID="0f369fc2-1255-457d-a93b-76f823c3b10a" containerID="836fe88f2a9f55c30848775eb06096d77073a15fc80165731d649706cd9e2a7d" exitCode=143 Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.868822 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0f369fc2-1255-457d-a93b-76f823c3b10a","Type":"ContainerDied","Data":"3cb269027525f3c6500b07861092c29bfb46d8da1d2d9a1ffb2365edc3fc9b86"} Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.868840 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.868865 5001 scope.go:117] "RemoveContainer" containerID="3cb269027525f3c6500b07861092c29bfb46d8da1d2d9a1ffb2365edc3fc9b86" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.868853 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0f369fc2-1255-457d-a93b-76f823c3b10a","Type":"ContainerDied","Data":"836fe88f2a9f55c30848775eb06096d77073a15fc80165731d649706cd9e2a7d"} Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.869017 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0f369fc2-1255-457d-a93b-76f823c3b10a","Type":"ContainerDied","Data":"456f2e13d3fd5cb07ace66d199b90de86bd11a56b4b6e8f674c6420853f7d2ad"} Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.883644 5001 generic.go:334] "Generic (PLEG): container finished" podID="156ded69-8a78-443a-815e-03550c64af18" containerID="e3931bf926393a31d5d9a30cd04f70574b09f1011c75430449f3a2cccefed7ea" exitCode=0 Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.883677 5001 generic.go:334] "Generic (PLEG): container finished" podID="156ded69-8a78-443a-815e-03550c64af18" containerID="249e053832484436d8729384f1a47e1bd2dea0ccc70467085a9dd824698cc0ca" exitCode=143 Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.883740 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"156ded69-8a78-443a-815e-03550c64af18","Type":"ContainerDied","Data":"e3931bf926393a31d5d9a30cd04f70574b09f1011c75430449f3a2cccefed7ea"} Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.883754 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.883779 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"156ded69-8a78-443a-815e-03550c64af18","Type":"ContainerDied","Data":"249e053832484436d8729384f1a47e1bd2dea0ccc70467085a9dd824698cc0ca"} Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.883792 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"156ded69-8a78-443a-815e-03550c64af18","Type":"ContainerDied","Data":"a98220de7f906ca093ce812c75ed6684bd6065a4c435c9b2e5254b2125b6da9b"} Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.890115 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx5x7\" (UniqueName: \"kubernetes.io/projected/156ded69-8a78-443a-815e-03550c64af18-kube-api-access-nx5x7\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.890433 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/156ded69-8a78-443a-815e-03550c64af18-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.890534 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/156ded69-8a78-443a-815e-03550c64af18-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.907221 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn" event={"ID":"6315e7d2-32f1-4654-b00b-cffdf8cf9879","Type":"ContainerDied","Data":"c207ac889e7dfc160d9b817f80ad17d4abd0b73bdb736518f7ad353d925f6f65"} Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.907260 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c207ac889e7dfc160d9b817f80ad17d4abd0b73bdb736518f7ad353d925f6f65" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.907293 5001 scope.go:117] "RemoveContainer" containerID="836fe88f2a9f55c30848775eb06096d77073a15fc80165731d649706cd9e2a7d" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.907402 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.932804 5001 scope.go:117] "RemoveContainer" containerID="3cb269027525f3c6500b07861092c29bfb46d8da1d2d9a1ffb2365edc3fc9b86" Jan 28 17:45:30 crc kubenswrapper[5001]: E0128 17:45:30.940716 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cb269027525f3c6500b07861092c29bfb46d8da1d2d9a1ffb2365edc3fc9b86\": container with ID starting with 3cb269027525f3c6500b07861092c29bfb46d8da1d2d9a1ffb2365edc3fc9b86 not found: ID does not exist" containerID="3cb269027525f3c6500b07861092c29bfb46d8da1d2d9a1ffb2365edc3fc9b86" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.940796 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cb269027525f3c6500b07861092c29bfb46d8da1d2d9a1ffb2365edc3fc9b86"} err="failed to get container status \"3cb269027525f3c6500b07861092c29bfb46d8da1d2d9a1ffb2365edc3fc9b86\": rpc error: code = NotFound desc = could not find container \"3cb269027525f3c6500b07861092c29bfb46d8da1d2d9a1ffb2365edc3fc9b86\": container with ID starting with 3cb269027525f3c6500b07861092c29bfb46d8da1d2d9a1ffb2365edc3fc9b86 not found: ID does not exist" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.940830 5001 scope.go:117] "RemoveContainer" containerID="836fe88f2a9f55c30848775eb06096d77073a15fc80165731d649706cd9e2a7d" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.943352 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:45:30 crc kubenswrapper[5001]: E0128 17:45:30.945446 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"836fe88f2a9f55c30848775eb06096d77073a15fc80165731d649706cd9e2a7d\": container with ID starting with 836fe88f2a9f55c30848775eb06096d77073a15fc80165731d649706cd9e2a7d not found: ID does not exist" containerID="836fe88f2a9f55c30848775eb06096d77073a15fc80165731d649706cd9e2a7d" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.945493 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"836fe88f2a9f55c30848775eb06096d77073a15fc80165731d649706cd9e2a7d"} err="failed to get container status \"836fe88f2a9f55c30848775eb06096d77073a15fc80165731d649706cd9e2a7d\": rpc error: code = NotFound desc = could not find container \"836fe88f2a9f55c30848775eb06096d77073a15fc80165731d649706cd9e2a7d\": container with ID starting with 836fe88f2a9f55c30848775eb06096d77073a15fc80165731d649706cd9e2a7d not found: ID does not exist" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.945524 5001 scope.go:117] "RemoveContainer" containerID="3cb269027525f3c6500b07861092c29bfb46d8da1d2d9a1ffb2365edc3fc9b86" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.945809 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cb269027525f3c6500b07861092c29bfb46d8da1d2d9a1ffb2365edc3fc9b86"} err="failed to get container status \"3cb269027525f3c6500b07861092c29bfb46d8da1d2d9a1ffb2365edc3fc9b86\": rpc error: code = NotFound desc = could not find container \"3cb269027525f3c6500b07861092c29bfb46d8da1d2d9a1ffb2365edc3fc9b86\": container with ID starting with 3cb269027525f3c6500b07861092c29bfb46d8da1d2d9a1ffb2365edc3fc9b86 not found: ID does not exist" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 
17:45:30.945828 5001 scope.go:117] "RemoveContainer" containerID="836fe88f2a9f55c30848775eb06096d77073a15fc80165731d649706cd9e2a7d" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.946279 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"836fe88f2a9f55c30848775eb06096d77073a15fc80165731d649706cd9e2a7d"} err="failed to get container status \"836fe88f2a9f55c30848775eb06096d77073a15fc80165731d649706cd9e2a7d\": rpc error: code = NotFound desc = could not find container \"836fe88f2a9f55c30848775eb06096d77073a15fc80165731d649706cd9e2a7d\": container with ID starting with 836fe88f2a9f55c30848775eb06096d77073a15fc80165731d649706cd9e2a7d not found: ID does not exist" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.946320 5001 scope.go:117] "RemoveContainer" containerID="e3931bf926393a31d5d9a30cd04f70574b09f1011c75430449f3a2cccefed7ea" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.960518 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.991145 5001 scope.go:117] "RemoveContainer" containerID="249e053832484436d8729384f1a47e1bd2dea0ccc70467085a9dd824698cc0ca" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.991298 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:45:30 crc kubenswrapper[5001]: E0128 17:45:30.991624 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="156ded69-8a78-443a-815e-03550c64af18" containerName="nova-kuttl-metadata-metadata" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.991677 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="156ded69-8a78-443a-815e-03550c64af18" containerName="nova-kuttl-metadata-metadata" Jan 28 17:45:30 crc kubenswrapper[5001]: E0128 17:45:30.991687 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="156ded69-8a78-443a-815e-03550c64af18" containerName="nova-kuttl-metadata-log" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.991694 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="156ded69-8a78-443a-815e-03550c64af18" containerName="nova-kuttl-metadata-log" Jan 28 17:45:30 crc kubenswrapper[5001]: E0128 17:45:30.991716 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f369fc2-1255-457d-a93b-76f823c3b10a" containerName="nova-kuttl-api-log" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.991722 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f369fc2-1255-457d-a93b-76f823c3b10a" containerName="nova-kuttl-api-log" Jan 28 17:45:30 crc kubenswrapper[5001]: E0128 17:45:30.991734 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66bc527a-0ef8-4620-9f6d-5894c0b8f34d" containerName="nova-manage" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.991739 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="66bc527a-0ef8-4620-9f6d-5894c0b8f34d" containerName="nova-manage" Jan 28 17:45:30 crc kubenswrapper[5001]: E0128 17:45:30.991748 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f369fc2-1255-457d-a93b-76f823c3b10a" containerName="nova-kuttl-api-api" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.991754 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f369fc2-1255-457d-a93b-76f823c3b10a" containerName="nova-kuttl-api-api" Jan 28 17:45:30 crc kubenswrapper[5001]: E0128 17:45:30.991767 5001 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6315e7d2-32f1-4654-b00b-cffdf8cf9879" containerName="nova-manage" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.991772 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="6315e7d2-32f1-4654-b00b-cffdf8cf9879" containerName="nova-manage" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.991913 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="6315e7d2-32f1-4654-b00b-cffdf8cf9879" containerName="nova-manage" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.991921 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f369fc2-1255-457d-a93b-76f823c3b10a" containerName="nova-kuttl-api-api" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.991930 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="66bc527a-0ef8-4620-9f6d-5894c0b8f34d" containerName="nova-manage" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.991942 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f369fc2-1255-457d-a93b-76f823c3b10a" containerName="nova-kuttl-api-log" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.991949 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="156ded69-8a78-443a-815e-03550c64af18" containerName="nova-kuttl-metadata-log" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.991960 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="156ded69-8a78-443a-815e-03550c64af18" containerName="nova-kuttl-metadata-metadata" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.993296 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:30 crc kubenswrapper[5001]: I0128 17:45:30.998229 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.002449 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.026476 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.036131 5001 scope.go:117] "RemoveContainer" containerID="e3931bf926393a31d5d9a30cd04f70574b09f1011c75430449f3a2cccefed7ea" Jan 28 17:45:31 crc kubenswrapper[5001]: E0128 17:45:31.037145 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3931bf926393a31d5d9a30cd04f70574b09f1011c75430449f3a2cccefed7ea\": container with ID starting with e3931bf926393a31d5d9a30cd04f70574b09f1011c75430449f3a2cccefed7ea not found: ID does not exist" containerID="e3931bf926393a31d5d9a30cd04f70574b09f1011c75430449f3a2cccefed7ea" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.037178 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3931bf926393a31d5d9a30cd04f70574b09f1011c75430449f3a2cccefed7ea"} err="failed to get container status \"e3931bf926393a31d5d9a30cd04f70574b09f1011c75430449f3a2cccefed7ea\": rpc error: code = NotFound desc = could not find container \"e3931bf926393a31d5d9a30cd04f70574b09f1011c75430449f3a2cccefed7ea\": container with ID starting with e3931bf926393a31d5d9a30cd04f70574b09f1011c75430449f3a2cccefed7ea not found: ID does not exist" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.037205 5001 scope.go:117] "RemoveContainer" 
containerID="249e053832484436d8729384f1a47e1bd2dea0ccc70467085a9dd824698cc0ca" Jan 28 17:45:31 crc kubenswrapper[5001]: E0128 17:45:31.037590 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"249e053832484436d8729384f1a47e1bd2dea0ccc70467085a9dd824698cc0ca\": container with ID starting with 249e053832484436d8729384f1a47e1bd2dea0ccc70467085a9dd824698cc0ca not found: ID does not exist" containerID="249e053832484436d8729384f1a47e1bd2dea0ccc70467085a9dd824698cc0ca" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.037642 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"249e053832484436d8729384f1a47e1bd2dea0ccc70467085a9dd824698cc0ca"} err="failed to get container status \"249e053832484436d8729384f1a47e1bd2dea0ccc70467085a9dd824698cc0ca\": rpc error: code = NotFound desc = could not find container \"249e053832484436d8729384f1a47e1bd2dea0ccc70467085a9dd824698cc0ca\": container with ID starting with 249e053832484436d8729384f1a47e1bd2dea0ccc70467085a9dd824698cc0ca not found: ID does not exist" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.037678 5001 scope.go:117] "RemoveContainer" containerID="e3931bf926393a31d5d9a30cd04f70574b09f1011c75430449f3a2cccefed7ea" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.037725 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.038006 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3931bf926393a31d5d9a30cd04f70574b09f1011c75430449f3a2cccefed7ea"} err="failed to get container status \"e3931bf926393a31d5d9a30cd04f70574b09f1011c75430449f3a2cccefed7ea\": rpc error: code = NotFound desc = could not find container \"e3931bf926393a31d5d9a30cd04f70574b09f1011c75430449f3a2cccefed7ea\": container with ID starting with e3931bf926393a31d5d9a30cd04f70574b09f1011c75430449f3a2cccefed7ea not found: ID does not exist" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.038033 5001 scope.go:117] "RemoveContainer" containerID="249e053832484436d8729384f1a47e1bd2dea0ccc70467085a9dd824698cc0ca" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.038280 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"249e053832484436d8729384f1a47e1bd2dea0ccc70467085a9dd824698cc0ca"} err="failed to get container status \"249e053832484436d8729384f1a47e1bd2dea0ccc70467085a9dd824698cc0ca\": rpc error: code = NotFound desc = could not find container \"249e053832484436d8729384f1a47e1bd2dea0ccc70467085a9dd824698cc0ca\": container with ID starting with 249e053832484436d8729384f1a47e1bd2dea0ccc70467085a9dd824698cc0ca not found: ID does not exist" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.046670 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.048314 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.051507 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.052940 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.100569 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddlwb\" (UniqueName: \"kubernetes.io/projected/533b536f-acd5-4142-a16f-9727a4e00d26-kube-api-access-ddlwb\") pod \"nova-kuttl-metadata-0\" (UID: \"533b536f-acd5-4142-a16f-9727a4e00d26\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.100638 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/533b536f-acd5-4142-a16f-9727a4e00d26-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"533b536f-acd5-4142-a16f-9727a4e00d26\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.100671 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4495db60-316a-4907-a347-c998d146fbe4-config-data\") pod \"nova-kuttl-api-0\" (UID: \"4495db60-316a-4907-a347-c998d146fbe4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.100757 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2zm5\" (UniqueName: \"kubernetes.io/projected/4495db60-316a-4907-a347-c998d146fbe4-kube-api-access-d2zm5\") pod \"nova-kuttl-api-0\" (UID: \"4495db60-316a-4907-a347-c998d146fbe4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.100797 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533b536f-acd5-4142-a16f-9727a4e00d26-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"533b536f-acd5-4142-a16f-9727a4e00d26\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.100832 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4495db60-316a-4907-a347-c998d146fbe4-logs\") pod \"nova-kuttl-api-0\" (UID: \"4495db60-316a-4907-a347-c998d146fbe4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.130622 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:45:31 crc kubenswrapper[5001]: E0128 17:45:31.131232 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config-data kube-api-access-d2zm5 logs], unattached volumes=[], failed to process volumes=[]: context canceled" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="4495db60-316a-4907-a347-c998d146fbe4" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.161424 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:45:31 crc kubenswrapper[5001]: E0128 17:45:31.162068 5001 pod_workers.go:1301] "Error syncing 
pod, skipping" err="unmounted volumes=[config-data kube-api-access-ddlwb logs], unattached volumes=[], failed to process volumes=[]: context canceled" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="533b536f-acd5-4142-a16f-9727a4e00d26" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.205305 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/533b536f-acd5-4142-a16f-9727a4e00d26-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"533b536f-acd5-4142-a16f-9727a4e00d26\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.205455 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4495db60-316a-4907-a347-c998d146fbe4-config-data\") pod \"nova-kuttl-api-0\" (UID: \"4495db60-316a-4907-a347-c998d146fbe4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.205508 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2zm5\" (UniqueName: \"kubernetes.io/projected/4495db60-316a-4907-a347-c998d146fbe4-kube-api-access-d2zm5\") pod \"nova-kuttl-api-0\" (UID: \"4495db60-316a-4907-a347-c998d146fbe4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.205541 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533b536f-acd5-4142-a16f-9727a4e00d26-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"533b536f-acd5-4142-a16f-9727a4e00d26\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.205569 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4495db60-316a-4907-a347-c998d146fbe4-logs\") pod \"nova-kuttl-api-0\" (UID: \"4495db60-316a-4907-a347-c998d146fbe4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.205614 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddlwb\" (UniqueName: \"kubernetes.io/projected/533b536f-acd5-4142-a16f-9727a4e00d26-kube-api-access-ddlwb\") pod \"nova-kuttl-metadata-0\" (UID: \"533b536f-acd5-4142-a16f-9727a4e00d26\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.206394 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/533b536f-acd5-4142-a16f-9727a4e00d26-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"533b536f-acd5-4142-a16f-9727a4e00d26\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.207298 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4495db60-316a-4907-a347-c998d146fbe4-logs\") pod \"nova-kuttl-api-0\" (UID: \"4495db60-316a-4907-a347-c998d146fbe4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.211738 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533b536f-acd5-4142-a16f-9727a4e00d26-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"533b536f-acd5-4142-a16f-9727a4e00d26\") " 
pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.211797 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4495db60-316a-4907-a347-c998d146fbe4-config-data\") pod \"nova-kuttl-api-0\" (UID: \"4495db60-316a-4907-a347-c998d146fbe4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.223811 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddlwb\" (UniqueName: \"kubernetes.io/projected/533b536f-acd5-4142-a16f-9727a4e00d26-kube-api-access-ddlwb\") pod \"nova-kuttl-metadata-0\" (UID: \"533b536f-acd5-4142-a16f-9727a4e00d26\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.224051 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2zm5\" (UniqueName: \"kubernetes.io/projected/4495db60-316a-4907-a347-c998d146fbe4-kube-api-access-d2zm5\") pod \"nova-kuttl-api-0\" (UID: \"4495db60-316a-4907-a347-c998d146fbe4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.319467 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.408153 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd4daf12-6f31-45e0-b586-2387cc93d41a-config-data\") pod \"dd4daf12-6f31-45e0-b586-2387cc93d41a\" (UID: \"dd4daf12-6f31-45e0-b586-2387cc93d41a\") " Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.408271 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd4daf12-6f31-45e0-b586-2387cc93d41a-scripts\") pod \"dd4daf12-6f31-45e0-b586-2387cc93d41a\" (UID: \"dd4daf12-6f31-45e0-b586-2387cc93d41a\") " Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.408404 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrdd6\" (UniqueName: \"kubernetes.io/projected/dd4daf12-6f31-45e0-b586-2387cc93d41a-kube-api-access-lrdd6\") pod \"dd4daf12-6f31-45e0-b586-2387cc93d41a\" (UID: \"dd4daf12-6f31-45e0-b586-2387cc93d41a\") " Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.412917 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd4daf12-6f31-45e0-b586-2387cc93d41a-scripts" (OuterVolumeSpecName: "scripts") pod "dd4daf12-6f31-45e0-b586-2387cc93d41a" (UID: "dd4daf12-6f31-45e0-b586-2387cc93d41a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.413081 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd4daf12-6f31-45e0-b586-2387cc93d41a-kube-api-access-lrdd6" (OuterVolumeSpecName: "kube-api-access-lrdd6") pod "dd4daf12-6f31-45e0-b586-2387cc93d41a" (UID: "dd4daf12-6f31-45e0-b586-2387cc93d41a"). InnerVolumeSpecName "kube-api-access-lrdd6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.426714 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd4daf12-6f31-45e0-b586-2387cc93d41a-config-data" (OuterVolumeSpecName: "config-data") pod "dd4daf12-6f31-45e0-b586-2387cc93d41a" (UID: "dd4daf12-6f31-45e0-b586-2387cc93d41a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.510563 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd4daf12-6f31-45e0-b586-2387cc93d41a-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.510611 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrdd6\" (UniqueName: \"kubernetes.io/projected/dd4daf12-6f31-45e0-b586-2387cc93d41a-kube-api-access-lrdd6\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.510631 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd4daf12-6f31-45e0-b586-2387cc93d41a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.928986 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" event={"ID":"dd4daf12-6f31-45e0-b586-2387cc93d41a","Type":"ContainerDied","Data":"2dd032d463dace2cddb172ab27d418b25eb147ba1bf645b7f0090ea86330944c"} Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.929321 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dd032d463dace2cddb172ab27d418b25eb147ba1bf645b7f0090ea86330944c" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.929064 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.929031 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.929162 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.939474 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:31 crc kubenswrapper[5001]: I0128 17:45:31.946830 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.018069 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4495db60-316a-4907-a347-c998d146fbe4-config-data\") pod \"4495db60-316a-4907-a347-c998d146fbe4\" (UID: \"4495db60-316a-4907-a347-c998d146fbe4\") " Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.018147 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlwb\" (UniqueName: \"kubernetes.io/projected/533b536f-acd5-4142-a16f-9727a4e00d26-kube-api-access-ddlwb\") pod \"533b536f-acd5-4142-a16f-9727a4e00d26\" (UID: \"533b536f-acd5-4142-a16f-9727a4e00d26\") " Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.018219 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2zm5\" (UniqueName: \"kubernetes.io/projected/4495db60-316a-4907-a347-c998d146fbe4-kube-api-access-d2zm5\") pod \"4495db60-316a-4907-a347-c998d146fbe4\" (UID: \"4495db60-316a-4907-a347-c998d146fbe4\") " Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.018250 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/533b536f-acd5-4142-a16f-9727a4e00d26-logs\") pod \"533b536f-acd5-4142-a16f-9727a4e00d26\" (UID: \"533b536f-acd5-4142-a16f-9727a4e00d26\") " Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.018292 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533b536f-acd5-4142-a16f-9727a4e00d26-config-data\") pod \"533b536f-acd5-4142-a16f-9727a4e00d26\" (UID: \"533b536f-acd5-4142-a16f-9727a4e00d26\") " Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.018347 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4495db60-316a-4907-a347-c998d146fbe4-logs\") pod \"4495db60-316a-4907-a347-c998d146fbe4\" (UID: \"4495db60-316a-4907-a347-c998d146fbe4\") " Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.018946 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4495db60-316a-4907-a347-c998d146fbe4-logs" (OuterVolumeSpecName: "logs") pod "4495db60-316a-4907-a347-c998d146fbe4" (UID: "4495db60-316a-4907-a347-c998d146fbe4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.019967 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/533b536f-acd5-4142-a16f-9727a4e00d26-logs" (OuterVolumeSpecName: "logs") pod "533b536f-acd5-4142-a16f-9727a4e00d26" (UID: "533b536f-acd5-4142-a16f-9727a4e00d26"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.034615 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4495db60-316a-4907-a347-c998d146fbe4-config-data" (OuterVolumeSpecName: "config-data") pod "4495db60-316a-4907-a347-c998d146fbe4" (UID: "4495db60-316a-4907-a347-c998d146fbe4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.034639 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533b536f-acd5-4142-a16f-9727a4e00d26-config-data" (OuterVolumeSpecName: "config-data") pod "533b536f-acd5-4142-a16f-9727a4e00d26" (UID: "533b536f-acd5-4142-a16f-9727a4e00d26"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.034682 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/533b536f-acd5-4142-a16f-9727a4e00d26-kube-api-access-ddlwb" (OuterVolumeSpecName: "kube-api-access-ddlwb") pod "533b536f-acd5-4142-a16f-9727a4e00d26" (UID: "533b536f-acd5-4142-a16f-9727a4e00d26"). InnerVolumeSpecName "kube-api-access-ddlwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.034726 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4495db60-316a-4907-a347-c998d146fbe4-kube-api-access-d2zm5" (OuterVolumeSpecName: "kube-api-access-d2zm5") pod "4495db60-316a-4907-a347-c998d146fbe4" (UID: "4495db60-316a-4907-a347-c998d146fbe4"). InnerVolumeSpecName "kube-api-access-d2zm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.120188 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4495db60-316a-4907-a347-c998d146fbe4-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.120231 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddlwb\" (UniqueName: \"kubernetes.io/projected/533b536f-acd5-4142-a16f-9727a4e00d26-kube-api-access-ddlwb\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.120246 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2zm5\" (UniqueName: \"kubernetes.io/projected/4495db60-316a-4907-a347-c998d146fbe4-kube-api-access-d2zm5\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.120263 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/533b536f-acd5-4142-a16f-9727a4e00d26-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.120275 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533b536f-acd5-4142-a16f-9727a4e00d26-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.120285 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4495db60-316a-4907-a347-c998d146fbe4-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.605651 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f369fc2-1255-457d-a93b-76f823c3b10a" path="/var/lib/kubelet/pods/0f369fc2-1255-457d-a93b-76f823c3b10a/volumes" Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.606249 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="156ded69-8a78-443a-815e-03550c64af18" path="/var/lib/kubelet/pods/156ded69-8a78-443a-815e-03550c64af18/volumes" Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.936727 5001 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.936771 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:32 crc kubenswrapper[5001]: I0128 17:45:32.996338 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.040889 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.060415 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:45:33 crc kubenswrapper[5001]: E0128 17:45:33.060855 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd4daf12-6f31-45e0-b586-2387cc93d41a" containerName="nova-manage" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.060877 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4daf12-6f31-45e0-b586-2387cc93d41a" containerName="nova-manage" Jan 28 17:45:33 crc kubenswrapper[5001]: E0128 17:45:33.060896 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd4daf12-6f31-45e0-b586-2387cc93d41a" containerName="nova-manage" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.060905 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4daf12-6f31-45e0-b586-2387cc93d41a" containerName="nova-manage" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.061072 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd4daf12-6f31-45e0-b586-2387cc93d41a" containerName="nova-manage" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.061089 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd4daf12-6f31-45e0-b586-2387cc93d41a" containerName="nova-manage" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.061901 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.069451 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.075867 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.110781 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.127907 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.135312 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.136437 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6505969-c260-48eb-a881-726eed24ac22-config-data\") pod \"nova-kuttl-api-0\" (UID: \"a6505969-c260-48eb-a881-726eed24ac22\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.136587 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97qpw\" (UniqueName: \"kubernetes.io/projected/a6505969-c260-48eb-a881-726eed24ac22-kube-api-access-97qpw\") pod \"nova-kuttl-api-0\" (UID: \"a6505969-c260-48eb-a881-726eed24ac22\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.136710 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6505969-c260-48eb-a881-726eed24ac22-logs\") pod \"nova-kuttl-api-0\" (UID: \"a6505969-c260-48eb-a881-726eed24ac22\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.136868 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.138755 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.144015 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.238037 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/756717c8-aaaf-4961-bff8-1dbcebc3405b-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"756717c8-aaaf-4961-bff8-1dbcebc3405b\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.238087 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrbgw\" (UniqueName: \"kubernetes.io/projected/756717c8-aaaf-4961-bff8-1dbcebc3405b-kube-api-access-qrbgw\") pod \"nova-kuttl-metadata-0\" (UID: \"756717c8-aaaf-4961-bff8-1dbcebc3405b\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.238126 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97qpw\" (UniqueName: \"kubernetes.io/projected/a6505969-c260-48eb-a881-726eed24ac22-kube-api-access-97qpw\") pod \"nova-kuttl-api-0\" (UID: \"a6505969-c260-48eb-a881-726eed24ac22\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.238218 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6505969-c260-48eb-a881-726eed24ac22-logs\") pod \"nova-kuttl-api-0\" (UID: \"a6505969-c260-48eb-a881-726eed24ac22\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.238281 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6505969-c260-48eb-a881-726eed24ac22-config-data\") pod \"nova-kuttl-api-0\" (UID: \"a6505969-c260-48eb-a881-726eed24ac22\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.238338 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/756717c8-aaaf-4961-bff8-1dbcebc3405b-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"756717c8-aaaf-4961-bff8-1dbcebc3405b\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.238743 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6505969-c260-48eb-a881-726eed24ac22-logs\") pod \"nova-kuttl-api-0\" (UID: \"a6505969-c260-48eb-a881-726eed24ac22\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.242672 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6505969-c260-48eb-a881-726eed24ac22-config-data\") pod \"nova-kuttl-api-0\" (UID: \"a6505969-c260-48eb-a881-726eed24ac22\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.264718 5001 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-97qpw\" (UniqueName: \"kubernetes.io/projected/a6505969-c260-48eb-a881-726eed24ac22-kube-api-access-97qpw\") pod \"nova-kuttl-api-0\" (UID: \"a6505969-c260-48eb-a881-726eed24ac22\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.339400 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/756717c8-aaaf-4961-bff8-1dbcebc3405b-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"756717c8-aaaf-4961-bff8-1dbcebc3405b\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.339453 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrbgw\" (UniqueName: \"kubernetes.io/projected/756717c8-aaaf-4961-bff8-1dbcebc3405b-kube-api-access-qrbgw\") pod \"nova-kuttl-metadata-0\" (UID: \"756717c8-aaaf-4961-bff8-1dbcebc3405b\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.339537 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/756717c8-aaaf-4961-bff8-1dbcebc3405b-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"756717c8-aaaf-4961-bff8-1dbcebc3405b\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.340248 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/756717c8-aaaf-4961-bff8-1dbcebc3405b-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"756717c8-aaaf-4961-bff8-1dbcebc3405b\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.345157 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/756717c8-aaaf-4961-bff8-1dbcebc3405b-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"756717c8-aaaf-4961-bff8-1dbcebc3405b\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.359258 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrbgw\" (UniqueName: \"kubernetes.io/projected/756717c8-aaaf-4961-bff8-1dbcebc3405b-kube-api-access-qrbgw\") pod \"nova-kuttl-metadata-0\" (UID: \"756717c8-aaaf-4961-bff8-1dbcebc3405b\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.384153 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.451445 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.807409 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.891342 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:45:33 crc kubenswrapper[5001]: W0128 17:45:33.900104 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod756717c8_aaaf_4961_bff8_1dbcebc3405b.slice/crio-726065172f51bfe73eace220e2346f25b77da65e6931f3bc6ab5a08f7db49240 WatchSource:0}: Error finding container 726065172f51bfe73eace220e2346f25b77da65e6931f3bc6ab5a08f7db49240: Status 404 returned error can't find the container with id 726065172f51bfe73eace220e2346f25b77da65e6931f3bc6ab5a08f7db49240 Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.948873 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"756717c8-aaaf-4961-bff8-1dbcebc3405b","Type":"ContainerStarted","Data":"726065172f51bfe73eace220e2346f25b77da65e6931f3bc6ab5a08f7db49240"} Jan 28 17:45:33 crc kubenswrapper[5001]: I0128 17:45:33.950779 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"a6505969-c260-48eb-a881-726eed24ac22","Type":"ContainerStarted","Data":"59e7b94b314c06a584264f284cbfd3ecbedf2074561014fcef90f59957072031"} Jan 28 17:45:34 crc kubenswrapper[5001]: I0128 17:45:34.536107 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:34 crc kubenswrapper[5001]: I0128 17:45:34.565937 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb851a70-5fe6-4871-a27c-af537c7718f4-config-data\") pod \"cb851a70-5fe6-4871-a27c-af537c7718f4\" (UID: \"cb851a70-5fe6-4871-a27c-af537c7718f4\") " Jan 28 17:45:34 crc kubenswrapper[5001]: I0128 17:45:34.566063 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kghsd\" (UniqueName: \"kubernetes.io/projected/cb851a70-5fe6-4871-a27c-af537c7718f4-kube-api-access-kghsd\") pod \"cb851a70-5fe6-4871-a27c-af537c7718f4\" (UID: \"cb851a70-5fe6-4871-a27c-af537c7718f4\") " Jan 28 17:45:34 crc kubenswrapper[5001]: I0128 17:45:34.577625 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb851a70-5fe6-4871-a27c-af537c7718f4-kube-api-access-kghsd" (OuterVolumeSpecName: "kube-api-access-kghsd") pod "cb851a70-5fe6-4871-a27c-af537c7718f4" (UID: "cb851a70-5fe6-4871-a27c-af537c7718f4"). InnerVolumeSpecName "kube-api-access-kghsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:45:34 crc kubenswrapper[5001]: I0128 17:45:34.599853 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb851a70-5fe6-4871-a27c-af537c7718f4-config-data" (OuterVolumeSpecName: "config-data") pod "cb851a70-5fe6-4871-a27c-af537c7718f4" (UID: "cb851a70-5fe6-4871-a27c-af537c7718f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:45:34 crc kubenswrapper[5001]: I0128 17:45:34.613437 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4495db60-316a-4907-a347-c998d146fbe4" path="/var/lib/kubelet/pods/4495db60-316a-4907-a347-c998d146fbe4/volumes" Jan 28 17:45:34 crc kubenswrapper[5001]: I0128 17:45:34.613993 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="533b536f-acd5-4142-a16f-9727a4e00d26" path="/var/lib/kubelet/pods/533b536f-acd5-4142-a16f-9727a4e00d26/volumes" Jan 28 17:45:34 crc kubenswrapper[5001]: I0128 17:45:34.668007 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kghsd\" (UniqueName: \"kubernetes.io/projected/cb851a70-5fe6-4871-a27c-af537c7718f4-kube-api-access-kghsd\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:34 crc kubenswrapper[5001]: I0128 17:45:34.668042 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb851a70-5fe6-4871-a27c-af537c7718f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:34 crc kubenswrapper[5001]: I0128 17:45:34.985638 5001 generic.go:334] "Generic (PLEG): container finished" podID="cb851a70-5fe6-4871-a27c-af537c7718f4" containerID="e9b7353429a0778ba2f4c0cdd6e54afb0dd9999635f518c57a3259f521e3aeb3" exitCode=0 Jan 28 17:45:34 crc kubenswrapper[5001]: I0128 17:45:34.985711 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"cb851a70-5fe6-4871-a27c-af537c7718f4","Type":"ContainerDied","Data":"e9b7353429a0778ba2f4c0cdd6e54afb0dd9999635f518c57a3259f521e3aeb3"} Jan 28 17:45:34 crc kubenswrapper[5001]: I0128 17:45:34.985741 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"cb851a70-5fe6-4871-a27c-af537c7718f4","Type":"ContainerDied","Data":"4c3da348b3810e1cab986232dd0f9be3ac6531aa2ac54c1b9c8ad69c08a17093"} Jan 28 17:45:34 crc kubenswrapper[5001]: I0128 17:45:34.985757 5001 scope.go:117] "RemoveContainer" containerID="e9b7353429a0778ba2f4c0cdd6e54afb0dd9999635f518c57a3259f521e3aeb3" Jan 28 17:45:34 crc kubenswrapper[5001]: I0128 17:45:34.985875 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:34 crc kubenswrapper[5001]: I0128 17:45:34.990674 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"756717c8-aaaf-4961-bff8-1dbcebc3405b","Type":"ContainerStarted","Data":"f24893eca4729ccd8cccea9023d01a6f1d4ec092fefc8df91b521043f211ecfd"} Jan 28 17:45:34 crc kubenswrapper[5001]: I0128 17:45:34.990718 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"756717c8-aaaf-4961-bff8-1dbcebc3405b","Type":"ContainerStarted","Data":"3fd2cd67926cebadba1b08d9bdc44694691fdf05c87db467ade93c45539a16cd"} Jan 28 17:45:34 crc kubenswrapper[5001]: I0128 17:45:34.993106 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"a6505969-c260-48eb-a881-726eed24ac22","Type":"ContainerStarted","Data":"eb6f772c603e4bf2b949856191b8cdc25f5dcfc0b1cc9558cf84e6cdd09076fa"} Jan 28 17:45:34 crc kubenswrapper[5001]: I0128 17:45:34.993133 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"a6505969-c260-48eb-a881-726eed24ac22","Type":"ContainerStarted","Data":"4e1735949c326fbcc463a2c792bd1454e53587d4d4cde81468825cbdef32c00f"} Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 17:45:35.020011 5001 scope.go:117] "RemoveContainer" containerID="e9b7353429a0778ba2f4c0cdd6e54afb0dd9999635f518c57a3259f521e3aeb3" Jan 28 17:45:35 crc kubenswrapper[5001]: E0128 17:45:35.022889 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9b7353429a0778ba2f4c0cdd6e54afb0dd9999635f518c57a3259f521e3aeb3\": container with ID starting with e9b7353429a0778ba2f4c0cdd6e54afb0dd9999635f518c57a3259f521e3aeb3 not found: ID does not exist" containerID="e9b7353429a0778ba2f4c0cdd6e54afb0dd9999635f518c57a3259f521e3aeb3" Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 17:45:35.022939 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9b7353429a0778ba2f4c0cdd6e54afb0dd9999635f518c57a3259f521e3aeb3"} err="failed to get container status \"e9b7353429a0778ba2f4c0cdd6e54afb0dd9999635f518c57a3259f521e3aeb3\": rpc error: code = NotFound desc = could not find container \"e9b7353429a0778ba2f4c0cdd6e54afb0dd9999635f518c57a3259f521e3aeb3\": container with ID starting with e9b7353429a0778ba2f4c0cdd6e54afb0dd9999635f518c57a3259f521e3aeb3 not found: ID does not exist" Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 17:45:35.025561 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 17:45:35.033172 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 17:45:35.049692 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:45:35 crc kubenswrapper[5001]: E0128 17:45:35.050201 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb851a70-5fe6-4871-a27c-af537c7718f4" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 17:45:35.050233 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb851a70-5fe6-4871-a27c-af537c7718f4" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 
17:45:35.050459 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb851a70-5fe6-4871-a27c-af537c7718f4" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 17:45:35.051193 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 17:45:35.053690 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 17:45:35.055870 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.055841912 podStartE2EDuration="2.055841912s" podCreationTimestamp="2026-01-28 17:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:45:35.022638283 +0000 UTC m=+1781.190426533" watchObservedRunningTime="2026-01-28 17:45:35.055841912 +0000 UTC m=+1781.223630152" Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 17:45:35.080550 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 17:45:35.081133 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32f10951-a7b3-4f68-8b03-94b291a9b088-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"32f10951-a7b3-4f68-8b03-94b291a9b088\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 17:45:35.081233 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmwhx\" (UniqueName: \"kubernetes.io/projected/32f10951-a7b3-4f68-8b03-94b291a9b088-kube-api-access-jmwhx\") pod \"nova-kuttl-scheduler-0\" (UID: \"32f10951-a7b3-4f68-8b03-94b291a9b088\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 17:45:35.086230 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=3.086212679 podStartE2EDuration="3.086212679s" podCreationTimestamp="2026-01-28 17:45:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:45:35.05023652 +0000 UTC m=+1781.218024760" watchObservedRunningTime="2026-01-28 17:45:35.086212679 +0000 UTC m=+1781.254000929" Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 17:45:35.182485 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32f10951-a7b3-4f68-8b03-94b291a9b088-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"32f10951-a7b3-4f68-8b03-94b291a9b088\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 17:45:35.182770 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmwhx\" (UniqueName: \"kubernetes.io/projected/32f10951-a7b3-4f68-8b03-94b291a9b088-kube-api-access-jmwhx\") pod \"nova-kuttl-scheduler-0\" (UID: \"32f10951-a7b3-4f68-8b03-94b291a9b088\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 17:45:35.186268 5001 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32f10951-a7b3-4f68-8b03-94b291a9b088-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"32f10951-a7b3-4f68-8b03-94b291a9b088\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 17:45:35.198826 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmwhx\" (UniqueName: \"kubernetes.io/projected/32f10951-a7b3-4f68-8b03-94b291a9b088-kube-api-access-jmwhx\") pod \"nova-kuttl-scheduler-0\" (UID: \"32f10951-a7b3-4f68-8b03-94b291a9b088\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 17:45:35.382635 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:35 crc kubenswrapper[5001]: I0128 17:45:35.793479 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:45:35 crc kubenswrapper[5001]: W0128 17:45:35.803291 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32f10951_a7b3_4f68_8b03_94b291a9b088.slice/crio-3fb76f91009cded1a4e714a9a5c7d2ec4c9ac0a3480aeb77f834ecd4b621413e WatchSource:0}: Error finding container 3fb76f91009cded1a4e714a9a5c7d2ec4c9ac0a3480aeb77f834ecd4b621413e: Status 404 returned error can't find the container with id 3fb76f91009cded1a4e714a9a5c7d2ec4c9ac0a3480aeb77f834ecd4b621413e Jan 28 17:45:36 crc kubenswrapper[5001]: I0128 17:45:36.014232 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"32f10951-a7b3-4f68-8b03-94b291a9b088","Type":"ContainerStarted","Data":"44ee67e1093d6cf1b973719f7fe28c35b495da56c124549e3e9604ae779c6f1f"} Jan 28 17:45:36 crc kubenswrapper[5001]: I0128 17:45:36.014271 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"32f10951-a7b3-4f68-8b03-94b291a9b088","Type":"ContainerStarted","Data":"3fb76f91009cded1a4e714a9a5c7d2ec4c9ac0a3480aeb77f834ecd4b621413e"} Jan 28 17:45:36 crc kubenswrapper[5001]: I0128 17:45:36.032027 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=1.032008479 podStartE2EDuration="1.032008479s" podCreationTimestamp="2026-01-28 17:45:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:45:36.027740856 +0000 UTC m=+1782.195529086" watchObservedRunningTime="2026-01-28 17:45:36.032008479 +0000 UTC m=+1782.199796709" Jan 28 17:45:36 crc kubenswrapper[5001]: I0128 17:45:36.604468 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb851a70-5fe6-4871-a27c-af537c7718f4" path="/var/lib/kubelet/pods/cb851a70-5fe6-4871-a27c-af537c7718f4/volumes" Jan 28 17:45:38 crc kubenswrapper[5001]: I0128 17:45:38.451558 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:38 crc kubenswrapper[5001]: I0128 17:45:38.451989 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:38 crc kubenswrapper[5001]: I0128 17:45:38.594031 5001 scope.go:117] "RemoveContainer" 
containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:45:38 crc kubenswrapper[5001]: E0128 17:45:38.594333 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:45:40 crc kubenswrapper[5001]: I0128 17:45:40.038493 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-db-sync-wkjqk"] Jan 28 17:45:40 crc kubenswrapper[5001]: I0128 17:45:40.049051 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-db-sync-wkjqk"] Jan 28 17:45:40 crc kubenswrapper[5001]: I0128 17:45:40.383554 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:40 crc kubenswrapper[5001]: I0128 17:45:40.606626 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5929507d-7d57-4274-bd5f-f784c279d763" path="/var/lib/kubelet/pods/5929507d-7d57-4274-bd5f-f784c279d763/volumes" Jan 28 17:45:42 crc kubenswrapper[5001]: I0128 17:45:42.026873 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-gw8xf"] Jan 28 17:45:42 crc kubenswrapper[5001]: I0128 17:45:42.034802 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-gw8xf"] Jan 28 17:45:42 crc kubenswrapper[5001]: I0128 17:45:42.604258 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7c4ec90-8fe2-48ff-8ab8-478abb02b03d" path="/var/lib/kubelet/pods/b7c4ec90-8fe2-48ff-8ab8-478abb02b03d/volumes" Jan 28 17:45:43 crc kubenswrapper[5001]: I0128 17:45:43.384829 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:43 crc kubenswrapper[5001]: I0128 17:45:43.384902 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:43 crc kubenswrapper[5001]: I0128 17:45:43.452573 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:43 crc kubenswrapper[5001]: I0128 17:45:43.452639 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:44 crc kubenswrapper[5001]: I0128 17:45:44.469183 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="a6505969-c260-48eb-a881-726eed24ac22" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:45:44 crc kubenswrapper[5001]: I0128 17:45:44.469466 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="a6505969-c260-48eb-a881-726eed24ac22" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:45:44 crc kubenswrapper[5001]: I0128 17:45:44.552302 5001 prober.go:107] "Probe failed" probeType="Startup" 
pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="756717c8-aaaf-4961-bff8-1dbcebc3405b" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.202:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:45:44 crc kubenswrapper[5001]: I0128 17:45:44.552739 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="756717c8-aaaf-4961-bff8-1dbcebc3405b" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.202:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:45:45 crc kubenswrapper[5001]: I0128 17:45:45.382842 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:45 crc kubenswrapper[5001]: I0128 17:45:45.412085 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:46 crc kubenswrapper[5001]: I0128 17:45:46.115874 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:45:50 crc kubenswrapper[5001]: I0128 17:45:50.594269 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:45:50 crc kubenswrapper[5001]: E0128 17:45:50.594899 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:45:53 crc kubenswrapper[5001]: I0128 17:45:53.390480 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:53 crc kubenswrapper[5001]: I0128 17:45:53.390892 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:53 crc kubenswrapper[5001]: I0128 17:45:53.391490 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:53 crc kubenswrapper[5001]: I0128 17:45:53.391531 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:53 crc kubenswrapper[5001]: I0128 17:45:53.398300 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:53 crc kubenswrapper[5001]: I0128 17:45:53.398673 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:45:53 crc kubenswrapper[5001]: I0128 17:45:53.454789 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:53 crc kubenswrapper[5001]: I0128 17:45:53.458140 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:53 crc kubenswrapper[5001]: I0128 17:45:53.458519 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:54 crc 
kubenswrapper[5001]: I0128 17:45:54.160303 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:45:59 crc kubenswrapper[5001]: I0128 17:45:59.352310 5001 scope.go:117] "RemoveContainer" containerID="a0f2cfaf86049378ef611c0b9be88ad6ad7ca696b97d61b543b936b956350897" Jan 28 17:45:59 crc kubenswrapper[5001]: I0128 17:45:59.384743 5001 scope.go:117] "RemoveContainer" containerID="a5444bfa04b6403bee32f437e0f37005900a5fc5f2cc65f33fef4d91898918f0" Jan 28 17:45:59 crc kubenswrapper[5001]: I0128 17:45:59.435889 5001 scope.go:117] "RemoveContainer" containerID="9fc87329a3f9d577f57d6de4eedac3d1915bfc3024d8b7681e1550c0bd07e776" Jan 28 17:45:59 crc kubenswrapper[5001]: I0128 17:45:59.470225 5001 scope.go:117] "RemoveContainer" containerID="cc45c85c7ec56e3e961c5863e4927f84487a54081e79d3d4bd9a9cb441e6a1e1" Jan 28 17:46:04 crc kubenswrapper[5001]: I0128 17:46:04.601200 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:46:04 crc kubenswrapper[5001]: E0128 17:46:04.602095 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:46:07 crc kubenswrapper[5001]: I0128 17:46:07.319495 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:46:07 crc kubenswrapper[5001]: I0128 17:46:07.319991 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="7c463cbc-3189-47fd-8b1d-33c332bebcb3" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://4c83b57a3e304c2894e24481b04a8d5b7855423da7f34072d34e9ae5495b6f65" gracePeriod=30 Jan 28 17:46:07 crc kubenswrapper[5001]: I0128 17:46:07.367334 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 17:46:07 crc kubenswrapper[5001]: I0128 17:46:07.367536 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="53256273-450e-4371-bd25-a1e8c96f2d77" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" containerID="cri-o://721f04d569c128e2e9198ec15a5ff4839ec5ec54189196d43663df8878b19bca" gracePeriod=30 Jan 28 17:46:07 crc kubenswrapper[5001]: I0128 17:46:07.396518 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:46:07 crc kubenswrapper[5001]: I0128 17:46:07.396739 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="32f10951-a7b3-4f68-8b03-94b291a9b088" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://44ee67e1093d6cf1b973719f7fe28c35b495da56c124549e3e9604ae779c6f1f" gracePeriod=30 Jan 28 17:46:07 crc kubenswrapper[5001]: I0128 17:46:07.443752 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:46:07 crc kubenswrapper[5001]: I0128 17:46:07.444030 5001 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="a6505969-c260-48eb-a881-726eed24ac22" containerName="nova-kuttl-api-log" containerID="cri-o://eb6f772c603e4bf2b949856191b8cdc25f5dcfc0b1cc9558cf84e6cdd09076fa" gracePeriod=30 Jan 28 17:46:07 crc kubenswrapper[5001]: I0128 17:46:07.444127 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="a6505969-c260-48eb-a881-726eed24ac22" containerName="nova-kuttl-api-api" containerID="cri-o://4e1735949c326fbcc463a2c792bd1454e53587d4d4cde81468825cbdef32c00f" gracePeriod=30 Jan 28 17:46:08 crc kubenswrapper[5001]: E0128 17:46:08.249718 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="721f04d569c128e2e9198ec15a5ff4839ec5ec54189196d43663df8878b19bca" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:46:08 crc kubenswrapper[5001]: E0128 17:46:08.252195 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="721f04d569c128e2e9198ec15a5ff4839ec5ec54189196d43663df8878b19bca" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:46:08 crc kubenswrapper[5001]: E0128 17:46:08.253355 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="721f04d569c128e2e9198ec15a5ff4839ec5ec54189196d43663df8878b19bca" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:46:08 crc kubenswrapper[5001]: E0128 17:46:08.253399 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="53256273-450e-4371-bd25-a1e8c96f2d77" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:46:08 crc kubenswrapper[5001]: I0128 17:46:08.273297 5001 generic.go:334] "Generic (PLEG): container finished" podID="a6505969-c260-48eb-a881-726eed24ac22" containerID="eb6f772c603e4bf2b949856191b8cdc25f5dcfc0b1cc9558cf84e6cdd09076fa" exitCode=143 Jan 28 17:46:08 crc kubenswrapper[5001]: I0128 17:46:08.273348 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"a6505969-c260-48eb-a881-726eed24ac22","Type":"ContainerDied","Data":"eb6f772c603e4bf2b949856191b8cdc25f5dcfc0b1cc9558cf84e6cdd09076fa"} Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.200280 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.289275 5001 generic.go:334] "Generic (PLEG): container finished" podID="7c463cbc-3189-47fd-8b1d-33c332bebcb3" containerID="4c83b57a3e304c2894e24481b04a8d5b7855423da7f34072d34e9ae5495b6f65" exitCode=0 Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.289332 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"7c463cbc-3189-47fd-8b1d-33c332bebcb3","Type":"ContainerDied","Data":"4c83b57a3e304c2894e24481b04a8d5b7855423da7f34072d34e9ae5495b6f65"} Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.289382 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"7c463cbc-3189-47fd-8b1d-33c332bebcb3","Type":"ContainerDied","Data":"c3d148abaa915dc8e4ba979497878f5ea648d0b49a38f22850cba75c4fc14ee7"} Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.289403 5001 scope.go:117] "RemoveContainer" containerID="4c83b57a3e304c2894e24481b04a8d5b7855423da7f34072d34e9ae5495b6f65" Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.289338 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.311197 5001 scope.go:117] "RemoveContainer" containerID="4c83b57a3e304c2894e24481b04a8d5b7855423da7f34072d34e9ae5495b6f65" Jan 28 17:46:10 crc kubenswrapper[5001]: E0128 17:46:10.311601 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c83b57a3e304c2894e24481b04a8d5b7855423da7f34072d34e9ae5495b6f65\": container with ID starting with 4c83b57a3e304c2894e24481b04a8d5b7855423da7f34072d34e9ae5495b6f65 not found: ID does not exist" containerID="4c83b57a3e304c2894e24481b04a8d5b7855423da7f34072d34e9ae5495b6f65" Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.311645 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c83b57a3e304c2894e24481b04a8d5b7855423da7f34072d34e9ae5495b6f65"} err="failed to get container status \"4c83b57a3e304c2894e24481b04a8d5b7855423da7f34072d34e9ae5495b6f65\": rpc error: code = NotFound desc = could not find container \"4c83b57a3e304c2894e24481b04a8d5b7855423da7f34072d34e9ae5495b6f65\": container with ID starting with 4c83b57a3e304c2894e24481b04a8d5b7855423da7f34072d34e9ae5495b6f65 not found: ID does not exist" Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.323964 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c463cbc-3189-47fd-8b1d-33c332bebcb3-config-data\") pod \"7c463cbc-3189-47fd-8b1d-33c332bebcb3\" (UID: \"7c463cbc-3189-47fd-8b1d-33c332bebcb3\") " Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.324024 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4ppl\" (UniqueName: \"kubernetes.io/projected/7c463cbc-3189-47fd-8b1d-33c332bebcb3-kube-api-access-s4ppl\") pod \"7c463cbc-3189-47fd-8b1d-33c332bebcb3\" (UID: \"7c463cbc-3189-47fd-8b1d-33c332bebcb3\") " Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.330692 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c463cbc-3189-47fd-8b1d-33c332bebcb3-kube-api-access-s4ppl" (OuterVolumeSpecName: 
"kube-api-access-s4ppl") pod "7c463cbc-3189-47fd-8b1d-33c332bebcb3" (UID: "7c463cbc-3189-47fd-8b1d-33c332bebcb3"). InnerVolumeSpecName "kube-api-access-s4ppl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.346766 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c463cbc-3189-47fd-8b1d-33c332bebcb3-config-data" (OuterVolumeSpecName: "config-data") pod "7c463cbc-3189-47fd-8b1d-33c332bebcb3" (UID: "7c463cbc-3189-47fd-8b1d-33c332bebcb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:46:10 crc kubenswrapper[5001]: E0128 17:46:10.385543 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ee67e1093d6cf1b973719f7fe28c35b495da56c124549e3e9604ae779c6f1f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:46:10 crc kubenswrapper[5001]: E0128 17:46:10.386991 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ee67e1093d6cf1b973719f7fe28c35b495da56c124549e3e9604ae779c6f1f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:46:10 crc kubenswrapper[5001]: E0128 17:46:10.389065 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ee67e1093d6cf1b973719f7fe28c35b495da56c124549e3e9604ae779c6f1f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:46:10 crc kubenswrapper[5001]: E0128 17:46:10.389101 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="32f10951-a7b3-4f68-8b03-94b291a9b088" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.426566 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c463cbc-3189-47fd-8b1d-33c332bebcb3-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.426623 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4ppl\" (UniqueName: \"kubernetes.io/projected/7c463cbc-3189-47fd-8b1d-33c332bebcb3-kube-api-access-s4ppl\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.621311 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.630209 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.659696 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:46:10 crc kubenswrapper[5001]: E0128 17:46:10.660164 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c463cbc-3189-47fd-8b1d-33c332bebcb3" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 
17:46:10.660180 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c463cbc-3189-47fd-8b1d-33c332bebcb3" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.660379 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c463cbc-3189-47fd-8b1d-33c332bebcb3" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.661115 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.666190 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.678983 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.679230 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="14bfd635-f134-4e2a-9921-5efdd3f205fc" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://bf237a427d2d198fbb2756b2824327c1e9e523afa76ded8b8ff591e4025d2936" gracePeriod=30 Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.710727 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.731796 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6bbn\" (UniqueName: \"kubernetes.io/projected/2c6eeeaf-d3e0-44af-a035-17e0a315fc53-kube-api-access-p6bbn\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"2c6eeeaf-d3e0-44af-a035-17e0a315fc53\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.731849 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6eeeaf-d3e0-44af-a035-17e0a315fc53-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"2c6eeeaf-d3e0-44af-a035-17e0a315fc53\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.833306 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6bbn\" (UniqueName: \"kubernetes.io/projected/2c6eeeaf-d3e0-44af-a035-17e0a315fc53-kube-api-access-p6bbn\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"2c6eeeaf-d3e0-44af-a035-17e0a315fc53\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.833354 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6eeeaf-d3e0-44af-a035-17e0a315fc53-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"2c6eeeaf-d3e0-44af-a035-17e0a315fc53\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.838226 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6eeeaf-d3e0-44af-a035-17e0a315fc53-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"2c6eeeaf-d3e0-44af-a035-17e0a315fc53\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:10 crc 
kubenswrapper[5001]: I0128 17:46:10.852107 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6bbn\" (UniqueName: \"kubernetes.io/projected/2c6eeeaf-d3e0-44af-a035-17e0a315fc53-kube-api-access-p6bbn\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"2c6eeeaf-d3e0-44af-a035-17e0a315fc53\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:10 crc kubenswrapper[5001]: I0128 17:46:10.979685 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.075901 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:11 crc kubenswrapper[5001]: E0128 17:46:11.218751 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf237a427d2d198fbb2756b2824327c1e9e523afa76ded8b8ff591e4025d2936" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 17:46:11 crc kubenswrapper[5001]: E0128 17:46:11.220487 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf237a427d2d198fbb2756b2824327c1e9e523afa76ded8b8ff591e4025d2936" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 17:46:11 crc kubenswrapper[5001]: E0128 17:46:11.226314 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf237a427d2d198fbb2756b2824327c1e9e523afa76ded8b8ff591e4025d2936" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 17:46:11 crc kubenswrapper[5001]: E0128 17:46:11.226359 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="14bfd635-f134-4e2a-9921-5efdd3f205fc" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.250603 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6505969-c260-48eb-a881-726eed24ac22-config-data\") pod \"a6505969-c260-48eb-a881-726eed24ac22\" (UID: \"a6505969-c260-48eb-a881-726eed24ac22\") " Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.250720 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97qpw\" (UniqueName: \"kubernetes.io/projected/a6505969-c260-48eb-a881-726eed24ac22-kube-api-access-97qpw\") pod \"a6505969-c260-48eb-a881-726eed24ac22\" (UID: \"a6505969-c260-48eb-a881-726eed24ac22\") " Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.250803 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6505969-c260-48eb-a881-726eed24ac22-logs\") pod \"a6505969-c260-48eb-a881-726eed24ac22\" (UID: \"a6505969-c260-48eb-a881-726eed24ac22\") " Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.251378 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a6505969-c260-48eb-a881-726eed24ac22-logs" (OuterVolumeSpecName: "logs") pod "a6505969-c260-48eb-a881-726eed24ac22" (UID: "a6505969-c260-48eb-a881-726eed24ac22"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.253923 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6505969-c260-48eb-a881-726eed24ac22-kube-api-access-97qpw" (OuterVolumeSpecName: "kube-api-access-97qpw") pod "a6505969-c260-48eb-a881-726eed24ac22" (UID: "a6505969-c260-48eb-a881-726eed24ac22"). InnerVolumeSpecName "kube-api-access-97qpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.278003 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6505969-c260-48eb-a881-726eed24ac22-config-data" (OuterVolumeSpecName: "config-data") pod "a6505969-c260-48eb-a881-726eed24ac22" (UID: "a6505969-c260-48eb-a881-726eed24ac22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.297989 5001 generic.go:334] "Generic (PLEG): container finished" podID="a6505969-c260-48eb-a881-726eed24ac22" containerID="4e1735949c326fbcc463a2c792bd1454e53587d4d4cde81468825cbdef32c00f" exitCode=0 Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.298033 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.298033 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"a6505969-c260-48eb-a881-726eed24ac22","Type":"ContainerDied","Data":"4e1735949c326fbcc463a2c792bd1454e53587d4d4cde81468825cbdef32c00f"} Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.298159 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"a6505969-c260-48eb-a881-726eed24ac22","Type":"ContainerDied","Data":"59e7b94b314c06a584264f284cbfd3ecbedf2074561014fcef90f59957072031"} Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.298215 5001 scope.go:117] "RemoveContainer" containerID="4e1735949c326fbcc463a2c792bd1454e53587d4d4cde81468825cbdef32c00f" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.318334 5001 scope.go:117] "RemoveContainer" containerID="eb6f772c603e4bf2b949856191b8cdc25f5dcfc0b1cc9558cf84e6cdd09076fa" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.333101 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.343760 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.343996 5001 scope.go:117] "RemoveContainer" containerID="4e1735949c326fbcc463a2c792bd1454e53587d4d4cde81468825cbdef32c00f" Jan 28 17:46:11 crc kubenswrapper[5001]: E0128 17:46:11.344465 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e1735949c326fbcc463a2c792bd1454e53587d4d4cde81468825cbdef32c00f\": container with ID starting with 4e1735949c326fbcc463a2c792bd1454e53587d4d4cde81468825cbdef32c00f not found: ID does not exist" containerID="4e1735949c326fbcc463a2c792bd1454e53587d4d4cde81468825cbdef32c00f" Jan 28 
17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.344495 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e1735949c326fbcc463a2c792bd1454e53587d4d4cde81468825cbdef32c00f"} err="failed to get container status \"4e1735949c326fbcc463a2c792bd1454e53587d4d4cde81468825cbdef32c00f\": rpc error: code = NotFound desc = could not find container \"4e1735949c326fbcc463a2c792bd1454e53587d4d4cde81468825cbdef32c00f\": container with ID starting with 4e1735949c326fbcc463a2c792bd1454e53587d4d4cde81468825cbdef32c00f not found: ID does not exist" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.344543 5001 scope.go:117] "RemoveContainer" containerID="eb6f772c603e4bf2b949856191b8cdc25f5dcfc0b1cc9558cf84e6cdd09076fa" Jan 28 17:46:11 crc kubenswrapper[5001]: E0128 17:46:11.344789 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb6f772c603e4bf2b949856191b8cdc25f5dcfc0b1cc9558cf84e6cdd09076fa\": container with ID starting with eb6f772c603e4bf2b949856191b8cdc25f5dcfc0b1cc9558cf84e6cdd09076fa not found: ID does not exist" containerID="eb6f772c603e4bf2b949856191b8cdc25f5dcfc0b1cc9558cf84e6cdd09076fa" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.344814 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb6f772c603e4bf2b949856191b8cdc25f5dcfc0b1cc9558cf84e6cdd09076fa"} err="failed to get container status \"eb6f772c603e4bf2b949856191b8cdc25f5dcfc0b1cc9558cf84e6cdd09076fa\": rpc error: code = NotFound desc = could not find container \"eb6f772c603e4bf2b949856191b8cdc25f5dcfc0b1cc9558cf84e6cdd09076fa\": container with ID starting with eb6f772c603e4bf2b949856191b8cdc25f5dcfc0b1cc9558cf84e6cdd09076fa not found: ID does not exist" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.354729 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6505969-c260-48eb-a881-726eed24ac22-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.354803 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97qpw\" (UniqueName: \"kubernetes.io/projected/a6505969-c260-48eb-a881-726eed24ac22-kube-api-access-97qpw\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.354819 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6505969-c260-48eb-a881-726eed24ac22-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.368222 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:46:11 crc kubenswrapper[5001]: E0128 17:46:11.368675 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6505969-c260-48eb-a881-726eed24ac22" containerName="nova-kuttl-api-api" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.368694 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6505969-c260-48eb-a881-726eed24ac22" containerName="nova-kuttl-api-api" Jan 28 17:46:11 crc kubenswrapper[5001]: E0128 17:46:11.368715 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6505969-c260-48eb-a881-726eed24ac22" containerName="nova-kuttl-api-log" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.368725 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6505969-c260-48eb-a881-726eed24ac22" 
containerName="nova-kuttl-api-log" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.369175 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6505969-c260-48eb-a881-726eed24ac22" containerName="nova-kuttl-api-api" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.369217 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6505969-c260-48eb-a881-726eed24ac22" containerName="nova-kuttl-api-log" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.370800 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.373823 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.381355 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.427994 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.456015 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/152386d6-cdfe-4d11-91b1-bc85d02bdcc0-logs\") pod \"nova-kuttl-api-0\" (UID: \"152386d6-cdfe-4d11-91b1-bc85d02bdcc0\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.456069 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/152386d6-cdfe-4d11-91b1-bc85d02bdcc0-config-data\") pod \"nova-kuttl-api-0\" (UID: \"152386d6-cdfe-4d11-91b1-bc85d02bdcc0\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.456104 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhlwd\" (UniqueName: \"kubernetes.io/projected/152386d6-cdfe-4d11-91b1-bc85d02bdcc0-kube-api-access-nhlwd\") pod \"nova-kuttl-api-0\" (UID: \"152386d6-cdfe-4d11-91b1-bc85d02bdcc0\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.558443 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/152386d6-cdfe-4d11-91b1-bc85d02bdcc0-logs\") pod \"nova-kuttl-api-0\" (UID: \"152386d6-cdfe-4d11-91b1-bc85d02bdcc0\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.558737 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/152386d6-cdfe-4d11-91b1-bc85d02bdcc0-config-data\") pod \"nova-kuttl-api-0\" (UID: \"152386d6-cdfe-4d11-91b1-bc85d02bdcc0\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.558763 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhlwd\" (UniqueName: \"kubernetes.io/projected/152386d6-cdfe-4d11-91b1-bc85d02bdcc0-kube-api-access-nhlwd\") pod \"nova-kuttl-api-0\" (UID: \"152386d6-cdfe-4d11-91b1-bc85d02bdcc0\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.558869 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/152386d6-cdfe-4d11-91b1-bc85d02bdcc0-logs\") pod \"nova-kuttl-api-0\" (UID: \"152386d6-cdfe-4d11-91b1-bc85d02bdcc0\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.564087 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/152386d6-cdfe-4d11-91b1-bc85d02bdcc0-config-data\") pod \"nova-kuttl-api-0\" (UID: \"152386d6-cdfe-4d11-91b1-bc85d02bdcc0\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.576451 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhlwd\" (UniqueName: \"kubernetes.io/projected/152386d6-cdfe-4d11-91b1-bc85d02bdcc0-kube-api-access-nhlwd\") pod \"nova-kuttl-api-0\" (UID: \"152386d6-cdfe-4d11-91b1-bc85d02bdcc0\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.683893 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:11 crc kubenswrapper[5001]: I0128 17:46:11.949530 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:46:12 crc kubenswrapper[5001]: I0128 17:46:12.360565 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"2c6eeeaf-d3e0-44af-a035-17e0a315fc53","Type":"ContainerStarted","Data":"b9681d61cb44ce8dfff6bb637148f27f755b2b4ae8d2268977594c67a8dfc34d"} Jan 28 17:46:12 crc kubenswrapper[5001]: I0128 17:46:12.360604 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"2c6eeeaf-d3e0-44af-a035-17e0a315fc53","Type":"ContainerStarted","Data":"6a93a27fd9be38e3b22ff59d2283f2d7fbec4b1adbbdefa7dd98ffb71a723445"} Jan 28 17:46:12 crc kubenswrapper[5001]: I0128 17:46:12.360724 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:12 crc kubenswrapper[5001]: I0128 17:46:12.378662 5001 generic.go:334] "Generic (PLEG): container finished" podID="14bfd635-f134-4e2a-9921-5efdd3f205fc" containerID="bf237a427d2d198fbb2756b2824327c1e9e523afa76ded8b8ff591e4025d2936" exitCode=0 Jan 28 17:46:12 crc kubenswrapper[5001]: I0128 17:46:12.378734 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"14bfd635-f134-4e2a-9921-5efdd3f205fc","Type":"ContainerDied","Data":"bf237a427d2d198fbb2756b2824327c1e9e523afa76ded8b8ff591e4025d2936"} Jan 28 17:46:12 crc kubenswrapper[5001]: I0128 17:46:12.380968 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"152386d6-cdfe-4d11-91b1-bc85d02bdcc0","Type":"ContainerStarted","Data":"6442506b4c6992f131b739b6d686a892cd229b4d595807ac9dc5cf41fac6238e"} Jan 28 17:46:12 crc kubenswrapper[5001]: I0128 17:46:12.443944 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=2.443918137 podStartE2EDuration="2.443918137s" podCreationTimestamp="2026-01-28 17:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:46:12.4190686 +0000 UTC m=+1818.586856830" watchObservedRunningTime="2026-01-28 
17:46:12.443918137 +0000 UTC m=+1818.611706367" Jan 28 17:46:12 crc kubenswrapper[5001]: I0128 17:46:12.465617 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:46:12 crc kubenswrapper[5001]: I0128 17:46:12.574989 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14bfd635-f134-4e2a-9921-5efdd3f205fc-config-data\") pod \"14bfd635-f134-4e2a-9921-5efdd3f205fc\" (UID: \"14bfd635-f134-4e2a-9921-5efdd3f205fc\") " Jan 28 17:46:12 crc kubenswrapper[5001]: I0128 17:46:12.575184 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltk6j\" (UniqueName: \"kubernetes.io/projected/14bfd635-f134-4e2a-9921-5efdd3f205fc-kube-api-access-ltk6j\") pod \"14bfd635-f134-4e2a-9921-5efdd3f205fc\" (UID: \"14bfd635-f134-4e2a-9921-5efdd3f205fc\") " Jan 28 17:46:12 crc kubenswrapper[5001]: I0128 17:46:12.583051 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14bfd635-f134-4e2a-9921-5efdd3f205fc-kube-api-access-ltk6j" (OuterVolumeSpecName: "kube-api-access-ltk6j") pod "14bfd635-f134-4e2a-9921-5efdd3f205fc" (UID: "14bfd635-f134-4e2a-9921-5efdd3f205fc"). InnerVolumeSpecName "kube-api-access-ltk6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:12 crc kubenswrapper[5001]: I0128 17:46:12.603988 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14bfd635-f134-4e2a-9921-5efdd3f205fc-config-data" (OuterVolumeSpecName: "config-data") pod "14bfd635-f134-4e2a-9921-5efdd3f205fc" (UID: "14bfd635-f134-4e2a-9921-5efdd3f205fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:46:12 crc kubenswrapper[5001]: I0128 17:46:12.609284 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c463cbc-3189-47fd-8b1d-33c332bebcb3" path="/var/lib/kubelet/pods/7c463cbc-3189-47fd-8b1d-33c332bebcb3/volumes" Jan 28 17:46:12 crc kubenswrapper[5001]: I0128 17:46:12.609954 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6505969-c260-48eb-a881-726eed24ac22" path="/var/lib/kubelet/pods/a6505969-c260-48eb-a881-726eed24ac22/volumes" Jan 28 17:46:12 crc kubenswrapper[5001]: I0128 17:46:12.677140 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltk6j\" (UniqueName: \"kubernetes.io/projected/14bfd635-f134-4e2a-9921-5efdd3f205fc-kube-api-access-ltk6j\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:12 crc kubenswrapper[5001]: I0128 17:46:12.677172 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14bfd635-f134-4e2a-9921-5efdd3f205fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.217242 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.386949 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53256273-450e-4371-bd25-a1e8c96f2d77-config-data\") pod \"53256273-450e-4371-bd25-a1e8c96f2d77\" (UID: \"53256273-450e-4371-bd25-a1e8c96f2d77\") " Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.387613 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m94ss\" (UniqueName: \"kubernetes.io/projected/53256273-450e-4371-bd25-a1e8c96f2d77-kube-api-access-m94ss\") pod \"53256273-450e-4371-bd25-a1e8c96f2d77\" (UID: \"53256273-450e-4371-bd25-a1e8c96f2d77\") " Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.390568 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53256273-450e-4371-bd25-a1e8c96f2d77-kube-api-access-m94ss" (OuterVolumeSpecName: "kube-api-access-m94ss") pod "53256273-450e-4371-bd25-a1e8c96f2d77" (UID: "53256273-450e-4371-bd25-a1e8c96f2d77"). InnerVolumeSpecName "kube-api-access-m94ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.421485 5001 generic.go:334] "Generic (PLEG): container finished" podID="53256273-450e-4371-bd25-a1e8c96f2d77" containerID="721f04d569c128e2e9198ec15a5ff4839ec5ec54189196d43663df8878b19bca" exitCode=0 Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.421587 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"53256273-450e-4371-bd25-a1e8c96f2d77","Type":"ContainerDied","Data":"721f04d569c128e2e9198ec15a5ff4839ec5ec54189196d43663df8878b19bca"} Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.421614 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"53256273-450e-4371-bd25-a1e8c96f2d77","Type":"ContainerDied","Data":"f3e804465c86fff179ad3453dc229629804bed01f372803126312f02d92b7358"} Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.421637 5001 scope.go:117] "RemoveContainer" containerID="721f04d569c128e2e9198ec15a5ff4839ec5ec54189196d43663df8878b19bca" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.421630 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.427708 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53256273-450e-4371-bd25-a1e8c96f2d77-config-data" (OuterVolumeSpecName: "config-data") pod "53256273-450e-4371-bd25-a1e8c96f2d77" (UID: "53256273-450e-4371-bd25-a1e8c96f2d77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.436393 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"14bfd635-f134-4e2a-9921-5efdd3f205fc","Type":"ContainerDied","Data":"9645dc3f1006b4f3596989e60c2bb95e49d637eb4127cc5c52a0ca68388bb201"} Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.436505 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.444071 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"152386d6-cdfe-4d11-91b1-bc85d02bdcc0","Type":"ContainerStarted","Data":"7bf8f53cfedb755719e00ebf81dd03a6cbc2a1747244e3740db4a8a92692ecbe"} Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.444140 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"152386d6-cdfe-4d11-91b1-bc85d02bdcc0","Type":"ContainerStarted","Data":"73a822a8de0350504d062b46850c92895ac174850d64800c3b07873b46127b98"} Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.465963 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.465943471 podStartE2EDuration="2.465943471s" podCreationTimestamp="2026-01-28 17:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:46:13.460542555 +0000 UTC m=+1819.628330785" watchObservedRunningTime="2026-01-28 17:46:13.465943471 +0000 UTC m=+1819.633731701" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.489180 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53256273-450e-4371-bd25-a1e8c96f2d77-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.489214 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m94ss\" (UniqueName: \"kubernetes.io/projected/53256273-450e-4371-bd25-a1e8c96f2d77-kube-api-access-m94ss\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.491838 5001 scope.go:117] "RemoveContainer" containerID="721f04d569c128e2e9198ec15a5ff4839ec5ec54189196d43663df8878b19bca" Jan 28 17:46:13 crc kubenswrapper[5001]: E0128 17:46:13.492404 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"721f04d569c128e2e9198ec15a5ff4839ec5ec54189196d43663df8878b19bca\": container with ID starting with 721f04d569c128e2e9198ec15a5ff4839ec5ec54189196d43663df8878b19bca not found: ID does not exist" containerID="721f04d569c128e2e9198ec15a5ff4839ec5ec54189196d43663df8878b19bca" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.492455 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721f04d569c128e2e9198ec15a5ff4839ec5ec54189196d43663df8878b19bca"} err="failed to get container status \"721f04d569c128e2e9198ec15a5ff4839ec5ec54189196d43663df8878b19bca\": rpc error: code = NotFound desc = could not find container \"721f04d569c128e2e9198ec15a5ff4839ec5ec54189196d43663df8878b19bca\": container with ID starting with 721f04d569c128e2e9198ec15a5ff4839ec5ec54189196d43663df8878b19bca not found: ID does not exist" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.492492 5001 scope.go:117] "RemoveContainer" containerID="bf237a427d2d198fbb2756b2824327c1e9e523afa76ded8b8ff591e4025d2936" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.499424 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.550662 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.561626 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:46:13 crc kubenswrapper[5001]: E0128 17:46:13.562088 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53256273-450e-4371-bd25-a1e8c96f2d77" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.562116 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="53256273-450e-4371-bd25-a1e8c96f2d77" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:46:13 crc kubenswrapper[5001]: E0128 17:46:13.562159 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14bfd635-f134-4e2a-9921-5efdd3f205fc" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.562166 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="14bfd635-f134-4e2a-9921-5efdd3f205fc" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.562363 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="14bfd635-f134-4e2a-9921-5efdd3f205fc" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.562385 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="53256273-450e-4371-bd25-a1e8c96f2d77" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.563225 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.566011 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.572253 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.705033 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnzg9\" (UniqueName: \"kubernetes.io/projected/86936800-e6b8-42cd-ac73-b9213072b89e-kube-api-access-wnzg9\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"86936800-e6b8-42cd-ac73-b9213072b89e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.705074 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86936800-e6b8-42cd-ac73-b9213072b89e-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"86936800-e6b8-42cd-ac73-b9213072b89e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.760026 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.774363 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.785798 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 17:46:13 crc 
kubenswrapper[5001]: I0128 17:46:13.786938 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.789297 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-compute-fake1-compute-config-data" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.799641 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.807409 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnzg9\" (UniqueName: \"kubernetes.io/projected/86936800-e6b8-42cd-ac73-b9213072b89e-kube-api-access-wnzg9\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"86936800-e6b8-42cd-ac73-b9213072b89e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.807458 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86936800-e6b8-42cd-ac73-b9213072b89e-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"86936800-e6b8-42cd-ac73-b9213072b89e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.811085 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86936800-e6b8-42cd-ac73-b9213072b89e-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"86936800-e6b8-42cd-ac73-b9213072b89e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.825522 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnzg9\" (UniqueName: \"kubernetes.io/projected/86936800-e6b8-42cd-ac73-b9213072b89e-kube-api-access-wnzg9\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"86936800-e6b8-42cd-ac73-b9213072b89e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.883337 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.910095 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxfhf\" (UniqueName: \"kubernetes.io/projected/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-kube-api-access-hxfhf\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"7d7efcb6-bbc3-4a64-83ec-66b36aea0fed\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:46:13 crc kubenswrapper[5001]: I0128 17:46:13.910186 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"7d7efcb6-bbc3-4a64-83ec-66b36aea0fed\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.013584 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxfhf\" (UniqueName: \"kubernetes.io/projected/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-kube-api-access-hxfhf\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"7d7efcb6-bbc3-4a64-83ec-66b36aea0fed\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.014086 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"7d7efcb6-bbc3-4a64-83ec-66b36aea0fed\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.029660 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"7d7efcb6-bbc3-4a64-83ec-66b36aea0fed\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.037120 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxfhf\" (UniqueName: \"kubernetes.io/projected/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-kube-api-access-hxfhf\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"7d7efcb6-bbc3-4a64-83ec-66b36aea0fed\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.109106 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.333803 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.415799 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.473460 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"86936800-e6b8-42cd-ac73-b9213072b89e","Type":"ContainerStarted","Data":"d824221b4a0749c659ae18fae74ae4963ca020f894b391bf07e446db29dc6563"} Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.481504 5001 generic.go:334] "Generic (PLEG): container finished" podID="32f10951-a7b3-4f68-8b03-94b291a9b088" containerID="44ee67e1093d6cf1b973719f7fe28c35b495da56c124549e3e9604ae779c6f1f" exitCode=0 Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.482377 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.482851 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"32f10951-a7b3-4f68-8b03-94b291a9b088","Type":"ContainerDied","Data":"44ee67e1093d6cf1b973719f7fe28c35b495da56c124549e3e9604ae779c6f1f"} Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.482887 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"32f10951-a7b3-4f68-8b03-94b291a9b088","Type":"ContainerDied","Data":"3fb76f91009cded1a4e714a9a5c7d2ec4c9ac0a3480aeb77f834ecd4b621413e"} Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.482919 5001 scope.go:117] "RemoveContainer" containerID="44ee67e1093d6cf1b973719f7fe28c35b495da56c124549e3e9604ae779c6f1f" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.504032 5001 scope.go:117] "RemoveContainer" containerID="44ee67e1093d6cf1b973719f7fe28c35b495da56c124549e3e9604ae779c6f1f" Jan 28 17:46:14 crc kubenswrapper[5001]: E0128 17:46:14.505621 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44ee67e1093d6cf1b973719f7fe28c35b495da56c124549e3e9604ae779c6f1f\": container with ID starting with 44ee67e1093d6cf1b973719f7fe28c35b495da56c124549e3e9604ae779c6f1f not found: ID does not exist" containerID="44ee67e1093d6cf1b973719f7fe28c35b495da56c124549e3e9604ae779c6f1f" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.505694 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44ee67e1093d6cf1b973719f7fe28c35b495da56c124549e3e9604ae779c6f1f"} err="failed to get container status \"44ee67e1093d6cf1b973719f7fe28c35b495da56c124549e3e9604ae779c6f1f\": rpc error: code = NotFound desc = could not find container \"44ee67e1093d6cf1b973719f7fe28c35b495da56c124549e3e9604ae779c6f1f\": container with ID starting with 44ee67e1093d6cf1b973719f7fe28c35b495da56c124549e3e9604ae779c6f1f not found: ID does not exist" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.521120 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32f10951-a7b3-4f68-8b03-94b291a9b088-config-data\") pod \"32f10951-a7b3-4f68-8b03-94b291a9b088\" (UID: \"32f10951-a7b3-4f68-8b03-94b291a9b088\") " Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.521870 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmwhx\" (UniqueName: \"kubernetes.io/projected/32f10951-a7b3-4f68-8b03-94b291a9b088-kube-api-access-jmwhx\") pod \"32f10951-a7b3-4f68-8b03-94b291a9b088\" 
(UID: \"32f10951-a7b3-4f68-8b03-94b291a9b088\") " Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.525188 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32f10951-a7b3-4f68-8b03-94b291a9b088-kube-api-access-jmwhx" (OuterVolumeSpecName: "kube-api-access-jmwhx") pod "32f10951-a7b3-4f68-8b03-94b291a9b088" (UID: "32f10951-a7b3-4f68-8b03-94b291a9b088"). InnerVolumeSpecName "kube-api-access-jmwhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.552541 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f10951-a7b3-4f68-8b03-94b291a9b088-config-data" (OuterVolumeSpecName: "config-data") pod "32f10951-a7b3-4f68-8b03-94b291a9b088" (UID: "32f10951-a7b3-4f68-8b03-94b291a9b088"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.623932 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32f10951-a7b3-4f68-8b03-94b291a9b088-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.623986 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmwhx\" (UniqueName: \"kubernetes.io/projected/32f10951-a7b3-4f68-8b03-94b291a9b088-kube-api-access-jmwhx\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.632791 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14bfd635-f134-4e2a-9921-5efdd3f205fc" path="/var/lib/kubelet/pods/14bfd635-f134-4e2a-9921-5efdd3f205fc/volumes" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.633496 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53256273-450e-4371-bd25-a1e8c96f2d77" path="/var/lib/kubelet/pods/53256273-450e-4371-bd25-a1e8c96f2d77/volumes" Jan 28 17:46:14 crc kubenswrapper[5001]: W0128 17:46:14.645007 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d7efcb6_bbc3_4a64_83ec_66b36aea0fed.slice/crio-5c5100ca8c408cdc1ae25b8a44cb3ee76e7441c53e977faacaef53391516cd37 WatchSource:0}: Error finding container 5c5100ca8c408cdc1ae25b8a44cb3ee76e7441c53e977faacaef53391516cd37: Status 404 returned error can't find the container with id 5c5100ca8c408cdc1ae25b8a44cb3ee76e7441c53e977faacaef53391516cd37 Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.656367 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.808442 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.816741 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.830588 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:46:14 crc kubenswrapper[5001]: E0128 17:46:14.831049 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32f10951-a7b3-4f68-8b03-94b291a9b088" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.831071 5001 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="32f10951-a7b3-4f68-8b03-94b291a9b088" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.831284 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="32f10951-a7b3-4f68-8b03-94b291a9b088" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.831931 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.834259 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.841397 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.928602 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a5d1c3-1661-46d5-8bca-3a91dbe1d816-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"42a5d1c3-1661-46d5-8bca-3a91dbe1d816\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:46:14 crc kubenswrapper[5001]: I0128 17:46:14.929065 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v4l2\" (UniqueName: \"kubernetes.io/projected/42a5d1c3-1661-46d5-8bca-3a91dbe1d816-kube-api-access-4v4l2\") pod \"nova-kuttl-scheduler-0\" (UID: \"42a5d1c3-1661-46d5-8bca-3a91dbe1d816\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:46:15 crc kubenswrapper[5001]: I0128 17:46:15.031065 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a5d1c3-1661-46d5-8bca-3a91dbe1d816-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"42a5d1c3-1661-46d5-8bca-3a91dbe1d816\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:46:15 crc kubenswrapper[5001]: I0128 17:46:15.031144 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v4l2\" (UniqueName: \"kubernetes.io/projected/42a5d1c3-1661-46d5-8bca-3a91dbe1d816-kube-api-access-4v4l2\") pod \"nova-kuttl-scheduler-0\" (UID: \"42a5d1c3-1661-46d5-8bca-3a91dbe1d816\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:46:15 crc kubenswrapper[5001]: I0128 17:46:15.050203 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a5d1c3-1661-46d5-8bca-3a91dbe1d816-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"42a5d1c3-1661-46d5-8bca-3a91dbe1d816\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:46:15 crc kubenswrapper[5001]: I0128 17:46:15.051558 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v4l2\" (UniqueName: \"kubernetes.io/projected/42a5d1c3-1661-46d5-8bca-3a91dbe1d816-kube-api-access-4v4l2\") pod \"nova-kuttl-scheduler-0\" (UID: \"42a5d1c3-1661-46d5-8bca-3a91dbe1d816\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:46:15 crc kubenswrapper[5001]: I0128 17:46:15.155911 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:46:15 crc kubenswrapper[5001]: I0128 17:46:15.493345 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"7d7efcb6-bbc3-4a64-83ec-66b36aea0fed","Type":"ContainerStarted","Data":"a41cdd408c6671b0723ee023e5af22221f9ebce7037cd380b16149bdd408bc9b"} Jan 28 17:46:15 crc kubenswrapper[5001]: I0128 17:46:15.493933 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:46:15 crc kubenswrapper[5001]: I0128 17:46:15.493956 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"7d7efcb6-bbc3-4a64-83ec-66b36aea0fed","Type":"ContainerStarted","Data":"5c5100ca8c408cdc1ae25b8a44cb3ee76e7441c53e977faacaef53391516cd37"} Jan 28 17:46:15 crc kubenswrapper[5001]: I0128 17:46:15.495158 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"86936800-e6b8-42cd-ac73-b9213072b89e","Type":"ContainerStarted","Data":"4e730d662f4f4c49291e1545b915ad6f3156a0d86b7f18ad4c9a17fbcdbaaa0a"} Jan 28 17:46:15 crc kubenswrapper[5001]: I0128 17:46:15.495323 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:46:15 crc kubenswrapper[5001]: I0128 17:46:15.518911 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podStartSLOduration=2.518888547 podStartE2EDuration="2.518888547s" podCreationTimestamp="2026-01-28 17:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:46:15.514265674 +0000 UTC m=+1821.682053904" watchObservedRunningTime="2026-01-28 17:46:15.518888547 +0000 UTC m=+1821.686676777" Jan 28 17:46:15 crc kubenswrapper[5001]: I0128 17:46:15.533712 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:46:15 crc kubenswrapper[5001]: I0128 17:46:15.534505 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=2.534479417 podStartE2EDuration="2.534479417s" podCreationTimestamp="2026-01-28 17:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:46:15.52866824 +0000 UTC m=+1821.696456460" watchObservedRunningTime="2026-01-28 17:46:15.534479417 +0000 UTC m=+1821.702267657" Jan 28 17:46:15 crc kubenswrapper[5001]: W0128 17:46:15.612281 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42a5d1c3_1661_46d5_8bca_3a91dbe1d816.slice/crio-34c611eaf5a9a27c576371e2269cef87e6ca3a0f33dc464a0615d87181c2509f WatchSource:0}: Error finding container 34c611eaf5a9a27c576371e2269cef87e6ca3a0f33dc464a0615d87181c2509f: Status 404 returned error can't find the container with id 34c611eaf5a9a27c576371e2269cef87e6ca3a0f33dc464a0615d87181c2509f Jan 28 17:46:15 crc kubenswrapper[5001]: I0128 17:46:15.625791 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:46:16 crc 
kubenswrapper[5001]: I0128 17:46:16.504682 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"42a5d1c3-1661-46d5-8bca-3a91dbe1d816","Type":"ContainerStarted","Data":"df13e5166aa89bdf11914322b33d0b320fdf9f9e06bd134579be5e12b3d5c442"} Jan 28 17:46:16 crc kubenswrapper[5001]: I0128 17:46:16.505011 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"42a5d1c3-1661-46d5-8bca-3a91dbe1d816","Type":"ContainerStarted","Data":"34c611eaf5a9a27c576371e2269cef87e6ca3a0f33dc464a0615d87181c2509f"} Jan 28 17:46:16 crc kubenswrapper[5001]: I0128 17:46:16.625769 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32f10951-a7b3-4f68-8b03-94b291a9b088" path="/var/lib/kubelet/pods/32f10951-a7b3-4f68-8b03-94b291a9b088/volumes" Jan 28 17:46:18 crc kubenswrapper[5001]: I0128 17:46:18.518558 5001 generic.go:334] "Generic (PLEG): container finished" podID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerID="a41cdd408c6671b0723ee023e5af22221f9ebce7037cd380b16149bdd408bc9b" exitCode=0 Jan 28 17:46:18 crc kubenswrapper[5001]: I0128 17:46:18.518591 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"7d7efcb6-bbc3-4a64-83ec-66b36aea0fed","Type":"ContainerDied","Data":"a41cdd408c6671b0723ee023e5af22221f9ebce7037cd380b16149bdd408bc9b"} Jan 28 17:46:18 crc kubenswrapper[5001]: I0128 17:46:18.519542 5001 scope.go:117] "RemoveContainer" containerID="a41cdd408c6671b0723ee023e5af22221f9ebce7037cd380b16149bdd408bc9b" Jan 28 17:46:18 crc kubenswrapper[5001]: I0128 17:46:18.542056 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=4.542034023 podStartE2EDuration="4.542034023s" podCreationTimestamp="2026-01-28 17:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:46:16.518924707 +0000 UTC m=+1822.686712937" watchObservedRunningTime="2026-01-28 17:46:18.542034023 +0000 UTC m=+1824.709822263" Jan 28 17:46:18 crc kubenswrapper[5001]: I0128 17:46:18.594577 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:46:18 crc kubenswrapper[5001]: E0128 17:46:18.594834 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:46:19 crc kubenswrapper[5001]: I0128 17:46:19.110540 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:46:19 crc kubenswrapper[5001]: I0128 17:46:19.528587 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"7d7efcb6-bbc3-4a64-83ec-66b36aea0fed","Type":"ContainerStarted","Data":"190631bee855183db0799a7c00d83ec04e8b2c5ce1213c0a7854d259684d38b3"} Jan 28 17:46:19 crc kubenswrapper[5001]: I0128 17:46:19.528828 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:46:19 crc kubenswrapper[5001]: I0128 17:46:19.563891 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:46:20 crc kubenswrapper[5001]: I0128 17:46:20.156443 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:46:21 crc kubenswrapper[5001]: I0128 17:46:21.021356 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:21 crc kubenswrapper[5001]: I0128 17:46:21.684137 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:21 crc kubenswrapper[5001]: I0128 17:46:21.684210 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:22 crc kubenswrapper[5001]: I0128 17:46:22.748259 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="152386d6-cdfe-4d11-91b1-bc85d02bdcc0" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:46:22 crc kubenswrapper[5001]: I0128 17:46:22.790169 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="152386d6-cdfe-4d11-91b1-bc85d02bdcc0" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:46:23 crc kubenswrapper[5001]: I0128 17:46:23.563320 5001 generic.go:334] "Generic (PLEG): container finished" podID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerID="190631bee855183db0799a7c00d83ec04e8b2c5ce1213c0a7854d259684d38b3" exitCode=0 Jan 28 17:46:23 crc kubenswrapper[5001]: I0128 17:46:23.563360 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"7d7efcb6-bbc3-4a64-83ec-66b36aea0fed","Type":"ContainerDied","Data":"190631bee855183db0799a7c00d83ec04e8b2c5ce1213c0a7854d259684d38b3"} Jan 28 17:46:23 crc kubenswrapper[5001]: I0128 17:46:23.563392 5001 scope.go:117] "RemoveContainer" containerID="a41cdd408c6671b0723ee023e5af22221f9ebce7037cd380b16149bdd408bc9b" Jan 28 17:46:23 crc kubenswrapper[5001]: I0128 17:46:23.570601 5001 scope.go:117] "RemoveContainer" containerID="190631bee855183db0799a7c00d83ec04e8b2c5ce1213c0a7854d259684d38b3" Jan 28 17:46:23 crc kubenswrapper[5001]: E0128 17:46:23.571471 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-kuttl-cell1-compute-fake1-compute-compute\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-kuttl-cell1-compute-fake1-compute-compute pod=nova-kuttl-cell1-compute-fake1-compute-0_nova-kuttl-default(7d7efcb6-bbc3-4a64-83ec-66b36aea0fed)\"" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" Jan 28 17:46:23 crc kubenswrapper[5001]: I0128 17:46:23.915569 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:46:24 crc kubenswrapper[5001]: I0128 17:46:24.111018 5001 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:46:24 crc kubenswrapper[5001]: I0128 17:46:24.111083 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:46:24 crc kubenswrapper[5001]: I0128 17:46:24.576340 5001 scope.go:117] "RemoveContainer" containerID="190631bee855183db0799a7c00d83ec04e8b2c5ce1213c0a7854d259684d38b3" Jan 28 17:46:24 crc kubenswrapper[5001]: E0128 17:46:24.576610 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-kuttl-cell1-compute-fake1-compute-compute\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-kuttl-cell1-compute-fake1-compute-compute pod=nova-kuttl-cell1-compute-fake1-compute-0_nova-kuttl-default(7d7efcb6-bbc3-4a64-83ec-66b36aea0fed)\"" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" Jan 28 17:46:25 crc kubenswrapper[5001]: I0128 17:46:25.156632 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:46:25 crc kubenswrapper[5001]: I0128 17:46:25.182601 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:46:25 crc kubenswrapper[5001]: I0128 17:46:25.605253 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:46:31 crc kubenswrapper[5001]: I0128 17:46:31.593836 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:46:31 crc kubenswrapper[5001]: E0128 17:46:31.595137 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:46:31 crc kubenswrapper[5001]: I0128 17:46:31.687624 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:31 crc kubenswrapper[5001]: I0128 17:46:31.687712 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:31 crc kubenswrapper[5001]: I0128 17:46:31.688172 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:31 crc kubenswrapper[5001]: I0128 17:46:31.688373 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:31 crc kubenswrapper[5001]: I0128 17:46:31.691695 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:31 crc kubenswrapper[5001]: I0128 17:46:31.692764 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:35 crc kubenswrapper[5001]: I0128 17:46:35.593411 5001 scope.go:117] "RemoveContainer" containerID="190631bee855183db0799a7c00d83ec04e8b2c5ce1213c0a7854d259684d38b3" Jan 28 17:46:36 crc kubenswrapper[5001]: I0128 17:46:36.669283 
5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"7d7efcb6-bbc3-4a64-83ec-66b36aea0fed","Type":"ContainerStarted","Data":"bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d"} Jan 28 17:46:36 crc kubenswrapper[5001]: I0128 17:46:36.670024 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:46:36 crc kubenswrapper[5001]: I0128 17:46:36.695104 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.143102 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.152163 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-8qxb5"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.160268 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.168698 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-tczwn"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.176790 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.189179 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-lv65x"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.197606 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.255926 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.256136 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="1e5490c0-8d10-4966-85b4-e3045a16a80d" containerName="nova-kuttl-cell1-novncproxy-novncproxy" containerID="cri-o://98f7290f1249cc7d6fd6209d67e94944d60d3399fe6939ecce2b1ffb73347f05" gracePeriod=30 Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.298785 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell12af7-account-delete-hwpv2"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.300127 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell12af7-account-delete-hwpv2" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.306968 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell12af7-account-delete-hwpv2"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.356229 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novaapieb14-account-delete-mx92g"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.360757 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapieb14-account-delete-mx92g"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.360862 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapieb14-account-delete-mx92g" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.421535 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/951a4226-b9e2-4b48-86cf-efaa562b986e-operator-scripts\") pod \"novaapieb14-account-delete-mx92g\" (UID: \"951a4226-b9e2-4b48-86cf-efaa562b986e\") " pod="nova-kuttl-default/novaapieb14-account-delete-mx92g" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.421595 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf2b8\" (UniqueName: \"kubernetes.io/projected/5024ec83-ca88-4a33-bc3a-c75444c3da94-kube-api-access-qf2b8\") pod \"novacell12af7-account-delete-hwpv2\" (UID: \"5024ec83-ca88-4a33-bc3a-c75444c3da94\") " pod="nova-kuttl-default/novacell12af7-account-delete-hwpv2" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.421631 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdjzw\" (UniqueName: \"kubernetes.io/projected/951a4226-b9e2-4b48-86cf-efaa562b986e-kube-api-access-gdjzw\") pod \"novaapieb14-account-delete-mx92g\" (UID: \"951a4226-b9e2-4b48-86cf-efaa562b986e\") " pod="nova-kuttl-default/novaapieb14-account-delete-mx92g" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.421849 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5024ec83-ca88-4a33-bc3a-c75444c3da94-operator-scripts\") pod \"novacell12af7-account-delete-hwpv2\" (UID: \"5024ec83-ca88-4a33-bc3a-c75444c3da94\") " pod="nova-kuttl-default/novacell12af7-account-delete-hwpv2" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.470151 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.470407 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="152386d6-cdfe-4d11-91b1-bc85d02bdcc0" containerName="nova-kuttl-api-log" containerID="cri-o://73a822a8de0350504d062b46850c92895ac174850d64800c3b07873b46127b98" gracePeriod=30 Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.470716 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="152386d6-cdfe-4d11-91b1-bc85d02bdcc0" containerName="nova-kuttl-api-api" containerID="cri-o://7bf8f53cfedb755719e00ebf81dd03a6cbc2a1747244e3740db4a8a92692ecbe" gracePeriod=30 Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.498624 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell0459a-account-delete-gh72t"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.499633 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell0459a-account-delete-gh72t" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.520903 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.521125 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="86936800-e6b8-42cd-ac73-b9213072b89e" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://4e730d662f4f4c49291e1545b915ad6f3156a0d86b7f18ad4c9a17fbcdbaaa0a" gracePeriod=30 Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.523154 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5024ec83-ca88-4a33-bc3a-c75444c3da94-operator-scripts\") pod \"novacell12af7-account-delete-hwpv2\" (UID: \"5024ec83-ca88-4a33-bc3a-c75444c3da94\") " pod="nova-kuttl-default/novacell12af7-account-delete-hwpv2" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.523228 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/951a4226-b9e2-4b48-86cf-efaa562b986e-operator-scripts\") pod \"novaapieb14-account-delete-mx92g\" (UID: \"951a4226-b9e2-4b48-86cf-efaa562b986e\") " pod="nova-kuttl-default/novaapieb14-account-delete-mx92g" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.523261 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf2b8\" (UniqueName: \"kubernetes.io/projected/5024ec83-ca88-4a33-bc3a-c75444c3da94-kube-api-access-qf2b8\") pod \"novacell12af7-account-delete-hwpv2\" (UID: \"5024ec83-ca88-4a33-bc3a-c75444c3da94\") " pod="nova-kuttl-default/novacell12af7-account-delete-hwpv2" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.523295 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdjzw\" (UniqueName: \"kubernetes.io/projected/951a4226-b9e2-4b48-86cf-efaa562b986e-kube-api-access-gdjzw\") pod \"novaapieb14-account-delete-mx92g\" (UID: \"951a4226-b9e2-4b48-86cf-efaa562b986e\") " pod="nova-kuttl-default/novaapieb14-account-delete-mx92g" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.524234 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5024ec83-ca88-4a33-bc3a-c75444c3da94-operator-scripts\") pod \"novacell12af7-account-delete-hwpv2\" (UID: \"5024ec83-ca88-4a33-bc3a-c75444c3da94\") " pod="nova-kuttl-default/novacell12af7-account-delete-hwpv2" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.555057 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/951a4226-b9e2-4b48-86cf-efaa562b986e-operator-scripts\") pod \"novaapieb14-account-delete-mx92g\" (UID: \"951a4226-b9e2-4b48-86cf-efaa562b986e\") " pod="nova-kuttl-default/novaapieb14-account-delete-mx92g" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.585126 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.585431 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="756717c8-aaaf-4961-bff8-1dbcebc3405b" 
containerName="nova-kuttl-metadata-log" containerID="cri-o://3fd2cd67926cebadba1b08d9bdc44694691fdf05c87db467ade93c45539a16cd" gracePeriod=30 Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.585932 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="756717c8-aaaf-4961-bff8-1dbcebc3405b" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://f24893eca4729ccd8cccea9023d01a6f1d4ec092fefc8df91b521043f211ecfd" gracePeriod=30 Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.600797 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell0459a-account-delete-gh72t"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.610666 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdjzw\" (UniqueName: \"kubernetes.io/projected/951a4226-b9e2-4b48-86cf-efaa562b986e-kube-api-access-gdjzw\") pod \"novaapieb14-account-delete-mx92g\" (UID: \"951a4226-b9e2-4b48-86cf-efaa562b986e\") " pod="nova-kuttl-default/novaapieb14-account-delete-mx92g" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.610718 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf2b8\" (UniqueName: \"kubernetes.io/projected/5024ec83-ca88-4a33-bc3a-c75444c3da94-kube-api-access-qf2b8\") pod \"novacell12af7-account-delete-hwpv2\" (UID: \"5024ec83-ca88-4a33-bc3a-c75444c3da94\") " pod="nova-kuttl-default/novacell12af7-account-delete-hwpv2" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.624824 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell12af7-account-delete-hwpv2" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.625347 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtmp7\" (UniqueName: \"kubernetes.io/projected/39f81527-6bd1-4210-81d9-c4274447b5bd-kube-api-access-jtmp7\") pod \"novacell0459a-account-delete-gh72t\" (UID: \"39f81527-6bd1-4210-81d9-c4274447b5bd\") " pod="nova-kuttl-default/novacell0459a-account-delete-gh72t" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.625432 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39f81527-6bd1-4210-81d9-c4274447b5bd-operator-scripts\") pod \"novacell0459a-account-delete-gh72t\" (UID: \"39f81527-6bd1-4210-81d9-c4274447b5bd\") " pod="nova-kuttl-default/novacell0459a-account-delete-gh72t" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.638238 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.653145 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-wf6nk"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.667279 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.667486 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="42a5d1c3-1661-46d5-8bca-3a91dbe1d816" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://df13e5166aa89bdf11914322b33d0b320fdf9f9e06bd134579be5e12b3d5c442" gracePeriod=30 Jan 28 17:46:37 crc 
kubenswrapper[5001]: I0128 17:46:37.680330 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapieb14-account-delete-mx92g" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.682486 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.684162 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="2c6eeeaf-d3e0-44af-a035-17e0a315fc53" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://b9681d61cb44ce8dfff6bb637148f27f755b2b4ae8d2268977594c67a8dfc34d" gracePeriod=30 Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.700513 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.707522 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-slhns"] Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.710718 5001 generic.go:334] "Generic (PLEG): container finished" podID="152386d6-cdfe-4d11-91b1-bc85d02bdcc0" containerID="73a822a8de0350504d062b46850c92895ac174850d64800c3b07873b46127b98" exitCode=143 Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.711398 5001 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" secret="" err="secret \"nova-nova-kuttl-dockercfg-jcvhv\" not found" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.711947 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"152386d6-cdfe-4d11-91b1-bc85d02bdcc0","Type":"ContainerDied","Data":"73a822a8de0350504d062b46850c92895ac174850d64800c3b07873b46127b98"} Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.726865 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39f81527-6bd1-4210-81d9-c4274447b5bd-operator-scripts\") pod \"novacell0459a-account-delete-gh72t\" (UID: \"39f81527-6bd1-4210-81d9-c4274447b5bd\") " pod="nova-kuttl-default/novacell0459a-account-delete-gh72t" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.727085 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtmp7\" (UniqueName: \"kubernetes.io/projected/39f81527-6bd1-4210-81d9-c4274447b5bd-kube-api-access-jtmp7\") pod \"novacell0459a-account-delete-gh72t\" (UID: \"39f81527-6bd1-4210-81d9-c4274447b5bd\") " pod="nova-kuttl-default/novacell0459a-account-delete-gh72t" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.728663 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39f81527-6bd1-4210-81d9-c4274447b5bd-operator-scripts\") pod \"novacell0459a-account-delete-gh72t\" (UID: \"39f81527-6bd1-4210-81d9-c4274447b5bd\") " pod="nova-kuttl-default/novacell0459a-account-delete-gh72t" Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.750520 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtmp7\" (UniqueName: \"kubernetes.io/projected/39f81527-6bd1-4210-81d9-c4274447b5bd-kube-api-access-jtmp7\") pod \"novacell0459a-account-delete-gh72t\" (UID: 
\"39f81527-6bd1-4210-81d9-c4274447b5bd\") " pod="nova-kuttl-default/novacell0459a-account-delete-gh72t" Jan 28 17:46:37 crc kubenswrapper[5001]: E0128 17:46:37.829107 5001 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 17:46:37 crc kubenswrapper[5001]: E0128 17:46:37.829172 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-config-data podName:7d7efcb6-bbc3-4a64-83ec-66b36aea0fed nodeName:}" failed. No retries permitted until 2026-01-28 17:46:38.329155281 +0000 UTC m=+1844.496943501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "7d7efcb6-bbc3-4a64-83ec-66b36aea0fed") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 17:46:37 crc kubenswrapper[5001]: I0128 17:46:37.850438 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0459a-account-delete-gh72t" Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.182606 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell12af7-account-delete-hwpv2"] Jan 28 17:46:38 crc kubenswrapper[5001]: W0128 17:46:38.184545 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5024ec83_ca88_4a33_bc3a_c75444c3da94.slice/crio-0d85309c50de5b9e7f0ca53810d937084cf4c22b6c50e13a6aa4feae5520e813 WatchSource:0}: Error finding container 0d85309c50de5b9e7f0ca53810d937084cf4c22b6c50e13a6aa4feae5520e813: Status 404 returned error can't find the container with id 0d85309c50de5b9e7f0ca53810d937084cf4c22b6c50e13a6aa4feae5520e813 Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.201042 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapieb14-account-delete-mx92g"] Jan 28 17:46:38 crc kubenswrapper[5001]: E0128 17:46:38.345603 5001 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 17:46:38 crc kubenswrapper[5001]: E0128 17:46:38.345919 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-config-data podName:7d7efcb6-bbc3-4a64-83ec-66b36aea0fed nodeName:}" failed. No retries permitted until 2026-01-28 17:46:39.345900078 +0000 UTC m=+1845.513688308 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "7d7efcb6-bbc3-4a64-83ec-66b36aea0fed") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.356482 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell0459a-account-delete-gh72t"] Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.397968 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.448635 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-652jj\" (UniqueName: \"kubernetes.io/projected/1e5490c0-8d10-4966-85b4-e3045a16a80d-kube-api-access-652jj\") pod \"1e5490c0-8d10-4966-85b4-e3045a16a80d\" (UID: \"1e5490c0-8d10-4966-85b4-e3045a16a80d\") " Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.448710 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5490c0-8d10-4966-85b4-e3045a16a80d-config-data\") pod \"1e5490c0-8d10-4966-85b4-e3045a16a80d\" (UID: \"1e5490c0-8d10-4966-85b4-e3045a16a80d\") " Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.462785 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e5490c0-8d10-4966-85b4-e3045a16a80d-kube-api-access-652jj" (OuterVolumeSpecName: "kube-api-access-652jj") pod "1e5490c0-8d10-4966-85b4-e3045a16a80d" (UID: "1e5490c0-8d10-4966-85b4-e3045a16a80d"). InnerVolumeSpecName "kube-api-access-652jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.471072 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e5490c0-8d10-4966-85b4-e3045a16a80d-config-data" (OuterVolumeSpecName: "config-data") pod "1e5490c0-8d10-4966-85b4-e3045a16a80d" (UID: "1e5490c0-8d10-4966-85b4-e3045a16a80d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.550716 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-652jj\" (UniqueName: \"kubernetes.io/projected/1e5490c0-8d10-4966-85b4-e3045a16a80d-kube-api-access-652jj\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.550757 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5490c0-8d10-4966-85b4-e3045a16a80d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.605776 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30a2d207-76b3-42da-a996-dcd014b8fcba" path="/var/lib/kubelet/pods/30a2d207-76b3-42da-a996-dcd014b8fcba/volumes" Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.606470 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6315e7d2-32f1-4654-b00b-cffdf8cf9879" path="/var/lib/kubelet/pods/6315e7d2-32f1-4654-b00b-cffdf8cf9879/volumes" Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.607162 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66bc527a-0ef8-4620-9f6d-5894c0b8f34d" path="/var/lib/kubelet/pods/66bc527a-0ef8-4620-9f6d-5894c0b8f34d/volumes" Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.608282 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f" path="/var/lib/kubelet/pods/9aa912e2-2bc6-49c9-8a67-c0c5893b9e4f/volumes" Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.610234 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd4daf12-6f31-45e0-b586-2387cc93d41a" path="/var/lib/kubelet/pods/dd4daf12-6f31-45e0-b586-2387cc93d41a/volumes" Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.720470 5001 
generic.go:334] "Generic (PLEG): container finished" podID="39f81527-6bd1-4210-81d9-c4274447b5bd" containerID="1c1c93db31f0de347cbce32e3e40024d64b4c0ab180d57b0198370768bebe8d8" exitCode=0 Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.720533 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0459a-account-delete-gh72t" event={"ID":"39f81527-6bd1-4210-81d9-c4274447b5bd","Type":"ContainerDied","Data":"1c1c93db31f0de347cbce32e3e40024d64b4c0ab180d57b0198370768bebe8d8"} Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.720558 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0459a-account-delete-gh72t" event={"ID":"39f81527-6bd1-4210-81d9-c4274447b5bd","Type":"ContainerStarted","Data":"571b2cb8d401806ffc5a964bd9042145c332b46d4d9ec547c53e1ab406341026"} Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.722235 5001 generic.go:334] "Generic (PLEG): container finished" podID="951a4226-b9e2-4b48-86cf-efaa562b986e" containerID="a4b094028e9fac812a5a896365822769d8ee142a4a1843d07567743b8ac08660" exitCode=0 Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.722306 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapieb14-account-delete-mx92g" event={"ID":"951a4226-b9e2-4b48-86cf-efaa562b986e","Type":"ContainerDied","Data":"a4b094028e9fac812a5a896365822769d8ee142a4a1843d07567743b8ac08660"} Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.722337 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapieb14-account-delete-mx92g" event={"ID":"951a4226-b9e2-4b48-86cf-efaa562b986e","Type":"ContainerStarted","Data":"2f0b2a04d2485ab5f2ba6b4cc4e1710edbe6ffb035503afca480d1643c0fc808"} Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.724008 5001 generic.go:334] "Generic (PLEG): container finished" podID="1e5490c0-8d10-4966-85b4-e3045a16a80d" containerID="98f7290f1249cc7d6fd6209d67e94944d60d3399fe6939ecce2b1ffb73347f05" exitCode=0 Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.724059 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.724058 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"1e5490c0-8d10-4966-85b4-e3045a16a80d","Type":"ContainerDied","Data":"98f7290f1249cc7d6fd6209d67e94944d60d3399fe6939ecce2b1ffb73347f05"} Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.724215 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"1e5490c0-8d10-4966-85b4-e3045a16a80d","Type":"ContainerDied","Data":"790d48bdd2f237bbd3e01334b3a7d8281ab9e6dba1b31ec14843b0af3d89670e"} Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.724247 5001 scope.go:117] "RemoveContainer" containerID="98f7290f1249cc7d6fd6209d67e94944d60d3399fe6939ecce2b1ffb73347f05" Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.725611 5001 generic.go:334] "Generic (PLEG): container finished" podID="756717c8-aaaf-4961-bff8-1dbcebc3405b" containerID="3fd2cd67926cebadba1b08d9bdc44694691fdf05c87db467ade93c45539a16cd" exitCode=143 Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.725669 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"756717c8-aaaf-4961-bff8-1dbcebc3405b","Type":"ContainerDied","Data":"3fd2cd67926cebadba1b08d9bdc44694691fdf05c87db467ade93c45539a16cd"} Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.726998 5001 generic.go:334] "Generic (PLEG): container finished" podID="5024ec83-ca88-4a33-bc3a-c75444c3da94" containerID="159e7a5c74518fee9b13070e1f65fa2dde0ebd3bd69cf6328280798fc68a188e" exitCode=0 Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.727054 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell12af7-account-delete-hwpv2" event={"ID":"5024ec83-ca88-4a33-bc3a-c75444c3da94","Type":"ContainerDied","Data":"159e7a5c74518fee9b13070e1f65fa2dde0ebd3bd69cf6328280798fc68a188e"} Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.727112 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell12af7-account-delete-hwpv2" event={"ID":"5024ec83-ca88-4a33-bc3a-c75444c3da94","Type":"ContainerStarted","Data":"0d85309c50de5b9e7f0ca53810d937084cf4c22b6c50e13a6aa4feae5520e813"} Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.727142 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" containerID="cri-o://bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" gracePeriod=30 Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.782772 5001 scope.go:117] "RemoveContainer" containerID="98f7290f1249cc7d6fd6209d67e94944d60d3399fe6939ecce2b1ffb73347f05" Jan 28 17:46:38 crc kubenswrapper[5001]: E0128 17:46:38.783163 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98f7290f1249cc7d6fd6209d67e94944d60d3399fe6939ecce2b1ffb73347f05\": container with ID starting with 98f7290f1249cc7d6fd6209d67e94944d60d3399fe6939ecce2b1ffb73347f05 not found: ID does not exist" containerID="98f7290f1249cc7d6fd6209d67e94944d60d3399fe6939ecce2b1ffb73347f05" Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.783200 5001 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98f7290f1249cc7d6fd6209d67e94944d60d3399fe6939ecce2b1ffb73347f05"} err="failed to get container status \"98f7290f1249cc7d6fd6209d67e94944d60d3399fe6939ecce2b1ffb73347f05\": rpc error: code = NotFound desc = could not find container \"98f7290f1249cc7d6fd6209d67e94944d60d3399fe6939ecce2b1ffb73347f05\": container with ID starting with 98f7290f1249cc7d6fd6209d67e94944d60d3399fe6939ecce2b1ffb73347f05 not found: ID does not exist" Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.808187 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:46:38 crc kubenswrapper[5001]: I0128 17:46:38.816391 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:46:38 crc kubenswrapper[5001]: E0128 17:46:38.885331 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e730d662f4f4c49291e1545b915ad6f3156a0d86b7f18ad4c9a17fbcdbaaa0a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 17:46:38 crc kubenswrapper[5001]: E0128 17:46:38.886549 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e730d662f4f4c49291e1545b915ad6f3156a0d86b7f18ad4c9a17fbcdbaaa0a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 17:46:38 crc kubenswrapper[5001]: E0128 17:46:38.890298 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e730d662f4f4c49291e1545b915ad6f3156a0d86b7f18ad4c9a17fbcdbaaa0a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 17:46:38 crc kubenswrapper[5001]: E0128 17:46:38.890374 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="86936800-e6b8-42cd-ac73-b9213072b89e" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:46:39 crc kubenswrapper[5001]: E0128 17:46:39.112115 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:46:39 crc kubenswrapper[5001]: E0128 17:46:39.114255 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:46:39 crc kubenswrapper[5001]: E0128 17:46:39.115644 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:46:39 crc kubenswrapper[5001]: E0128 17:46:39.115678 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:46:39 crc kubenswrapper[5001]: E0128 17:46:39.363120 5001 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 17:46:39 crc kubenswrapper[5001]: E0128 17:46:39.363192 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-config-data podName:7d7efcb6-bbc3-4a64-83ec-66b36aea0fed nodeName:}" failed. No retries permitted until 2026-01-28 17:46:41.363173986 +0000 UTC m=+1847.530962216 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "7d7efcb6-bbc3-4a64-83ec-66b36aea0fed") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 17:46:39 crc kubenswrapper[5001]: I0128 17:46:39.646681 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:46:39 crc kubenswrapper[5001]: I0128 17:46:39.738311 5001 generic.go:334] "Generic (PLEG): container finished" podID="86936800-e6b8-42cd-ac73-b9213072b89e" containerID="4e730d662f4f4c49291e1545b915ad6f3156a0d86b7f18ad4c9a17fbcdbaaa0a" exitCode=0 Jan 28 17:46:39 crc kubenswrapper[5001]: I0128 17:46:39.738351 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:46:39 crc kubenswrapper[5001]: I0128 17:46:39.738388 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"86936800-e6b8-42cd-ac73-b9213072b89e","Type":"ContainerDied","Data":"4e730d662f4f4c49291e1545b915ad6f3156a0d86b7f18ad4c9a17fbcdbaaa0a"} Jan 28 17:46:39 crc kubenswrapper[5001]: I0128 17:46:39.738409 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"86936800-e6b8-42cd-ac73-b9213072b89e","Type":"ContainerDied","Data":"d824221b4a0749c659ae18fae74ae4963ca020f894b391bf07e446db29dc6563"} Jan 28 17:46:39 crc kubenswrapper[5001]: I0128 17:46:39.738424 5001 scope.go:117] "RemoveContainer" containerID="4e730d662f4f4c49291e1545b915ad6f3156a0d86b7f18ad4c9a17fbcdbaaa0a" Jan 28 17:46:39 crc kubenswrapper[5001]: I0128 17:46:39.767077 5001 scope.go:117] "RemoveContainer" containerID="4e730d662f4f4c49291e1545b915ad6f3156a0d86b7f18ad4c9a17fbcdbaaa0a" Jan 28 17:46:39 crc kubenswrapper[5001]: I0128 17:46:39.769454 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86936800-e6b8-42cd-ac73-b9213072b89e-config-data\") pod \"86936800-e6b8-42cd-ac73-b9213072b89e\" (UID: \"86936800-e6b8-42cd-ac73-b9213072b89e\") " Jan 28 17:46:39 crc kubenswrapper[5001]: I0128 17:46:39.769500 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnzg9\" (UniqueName: \"kubernetes.io/projected/86936800-e6b8-42cd-ac73-b9213072b89e-kube-api-access-wnzg9\") pod \"86936800-e6b8-42cd-ac73-b9213072b89e\" (UID: \"86936800-e6b8-42cd-ac73-b9213072b89e\") " Jan 28 17:46:39 crc kubenswrapper[5001]: E0128 17:46:39.771716 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e730d662f4f4c49291e1545b915ad6f3156a0d86b7f18ad4c9a17fbcdbaaa0a\": container with ID starting with 4e730d662f4f4c49291e1545b915ad6f3156a0d86b7f18ad4c9a17fbcdbaaa0a not found: ID does not exist" containerID="4e730d662f4f4c49291e1545b915ad6f3156a0d86b7f18ad4c9a17fbcdbaaa0a" Jan 28 17:46:39 crc kubenswrapper[5001]: I0128 17:46:39.771755 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e730d662f4f4c49291e1545b915ad6f3156a0d86b7f18ad4c9a17fbcdbaaa0a"} err="failed to get container status \"4e730d662f4f4c49291e1545b915ad6f3156a0d86b7f18ad4c9a17fbcdbaaa0a\": rpc error: code = NotFound desc = could not find container \"4e730d662f4f4c49291e1545b915ad6f3156a0d86b7f18ad4c9a17fbcdbaaa0a\": container with ID starting with 4e730d662f4f4c49291e1545b915ad6f3156a0d86b7f18ad4c9a17fbcdbaaa0a not found: ID does not exist" Jan 28 17:46:39 crc kubenswrapper[5001]: I0128 17:46:39.774187 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86936800-e6b8-42cd-ac73-b9213072b89e-kube-api-access-wnzg9" (OuterVolumeSpecName: "kube-api-access-wnzg9") pod "86936800-e6b8-42cd-ac73-b9213072b89e" (UID: "86936800-e6b8-42cd-ac73-b9213072b89e"). InnerVolumeSpecName "kube-api-access-wnzg9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:39 crc kubenswrapper[5001]: I0128 17:46:39.793479 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86936800-e6b8-42cd-ac73-b9213072b89e-config-data" (OuterVolumeSpecName: "config-data") pod "86936800-e6b8-42cd-ac73-b9213072b89e" (UID: "86936800-e6b8-42cd-ac73-b9213072b89e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:46:39 crc kubenswrapper[5001]: I0128 17:46:39.870986 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86936800-e6b8-42cd-ac73-b9213072b89e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:39 crc kubenswrapper[5001]: I0128 17:46:39.871020 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnzg9\" (UniqueName: \"kubernetes.io/projected/86936800-e6b8-42cd-ac73-b9213072b89e-kube-api-access-wnzg9\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.094922 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell12af7-account-delete-hwpv2" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.106116 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapieb14-account-delete-mx92g" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.113783 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.121622 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0459a-account-delete-gh72t" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.122513 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:46:40 crc kubenswrapper[5001]: E0128 17:46:40.159770 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="df13e5166aa89bdf11914322b33d0b320fdf9f9e06bd134579be5e12b3d5c442" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.177014 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5024ec83-ca88-4a33-bc3a-c75444c3da94-operator-scripts\") pod \"5024ec83-ca88-4a33-bc3a-c75444c3da94\" (UID: \"5024ec83-ca88-4a33-bc3a-c75444c3da94\") " Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.180483 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtmp7\" (UniqueName: \"kubernetes.io/projected/39f81527-6bd1-4210-81d9-c4274447b5bd-kube-api-access-jtmp7\") pod \"39f81527-6bd1-4210-81d9-c4274447b5bd\" (UID: \"39f81527-6bd1-4210-81d9-c4274447b5bd\") " Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.180555 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf2b8\" (UniqueName: \"kubernetes.io/projected/5024ec83-ca88-4a33-bc3a-c75444c3da94-kube-api-access-qf2b8\") pod \"5024ec83-ca88-4a33-bc3a-c75444c3da94\" (UID: \"5024ec83-ca88-4a33-bc3a-c75444c3da94\") " Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.180606 5001 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39f81527-6bd1-4210-81d9-c4274447b5bd-operator-scripts\") pod \"39f81527-6bd1-4210-81d9-c4274447b5bd\" (UID: \"39f81527-6bd1-4210-81d9-c4274447b5bd\") " Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.180680 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/951a4226-b9e2-4b48-86cf-efaa562b986e-operator-scripts\") pod \"951a4226-b9e2-4b48-86cf-efaa562b986e\" (UID: \"951a4226-b9e2-4b48-86cf-efaa562b986e\") " Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.180716 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdjzw\" (UniqueName: \"kubernetes.io/projected/951a4226-b9e2-4b48-86cf-efaa562b986e-kube-api-access-gdjzw\") pod \"951a4226-b9e2-4b48-86cf-efaa562b986e\" (UID: \"951a4226-b9e2-4b48-86cf-efaa562b986e\") " Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.181192 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5024ec83-ca88-4a33-bc3a-c75444c3da94-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5024ec83-ca88-4a33-bc3a-c75444c3da94" (UID: "5024ec83-ca88-4a33-bc3a-c75444c3da94"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.181772 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5024ec83-ca88-4a33-bc3a-c75444c3da94-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:40 crc kubenswrapper[5001]: E0128 17:46:40.183589 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="df13e5166aa89bdf11914322b33d0b320fdf9f9e06bd134579be5e12b3d5c442" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.187185 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/951a4226-b9e2-4b48-86cf-efaa562b986e-kube-api-access-gdjzw" (OuterVolumeSpecName: "kube-api-access-gdjzw") pod "951a4226-b9e2-4b48-86cf-efaa562b986e" (UID: "951a4226-b9e2-4b48-86cf-efaa562b986e"). InnerVolumeSpecName "kube-api-access-gdjzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.187508 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39f81527-6bd1-4210-81d9-c4274447b5bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "39f81527-6bd1-4210-81d9-c4274447b5bd" (UID: "39f81527-6bd1-4210-81d9-c4274447b5bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.187903 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/951a4226-b9e2-4b48-86cf-efaa562b986e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "951a4226-b9e2-4b48-86cf-efaa562b986e" (UID: "951a4226-b9e2-4b48-86cf-efaa562b986e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.194673 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39f81527-6bd1-4210-81d9-c4274447b5bd-kube-api-access-jtmp7" (OuterVolumeSpecName: "kube-api-access-jtmp7") pod "39f81527-6bd1-4210-81d9-c4274447b5bd" (UID: "39f81527-6bd1-4210-81d9-c4274447b5bd"). InnerVolumeSpecName "kube-api-access-jtmp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:40 crc kubenswrapper[5001]: E0128 17:46:40.211400 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="df13e5166aa89bdf11914322b33d0b320fdf9f9e06bd134579be5e12b3d5c442" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:46:40 crc kubenswrapper[5001]: E0128 17:46:40.211504 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="42a5d1c3-1661-46d5-8bca-3a91dbe1d816" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.211759 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5024ec83-ca88-4a33-bc3a-c75444c3da94-kube-api-access-qf2b8" (OuterVolumeSpecName: "kube-api-access-qf2b8") pod "5024ec83-ca88-4a33-bc3a-c75444c3da94" (UID: "5024ec83-ca88-4a33-bc3a-c75444c3da94"). InnerVolumeSpecName "kube-api-access-qf2b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.283121 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtmp7\" (UniqueName: \"kubernetes.io/projected/39f81527-6bd1-4210-81d9-c4274447b5bd-kube-api-access-jtmp7\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.283185 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf2b8\" (UniqueName: \"kubernetes.io/projected/5024ec83-ca88-4a33-bc3a-c75444c3da94-kube-api-access-qf2b8\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.283200 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39f81527-6bd1-4210-81d9-c4274447b5bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.283211 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/951a4226-b9e2-4b48-86cf-efaa562b986e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.283225 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdjzw\" (UniqueName: \"kubernetes.io/projected/951a4226-b9e2-4b48-86cf-efaa562b986e-kube-api-access-gdjzw\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.604285 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e5490c0-8d10-4966-85b4-e3045a16a80d" path="/var/lib/kubelet/pods/1e5490c0-8d10-4966-85b4-e3045a16a80d/volumes" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.604952 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="86936800-e6b8-42cd-ac73-b9213072b89e" path="/var/lib/kubelet/pods/86936800-e6b8-42cd-ac73-b9213072b89e/volumes" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.726518 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="756717c8-aaaf-4961-bff8-1dbcebc3405b" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.202:8775/\": read tcp 10.217.0.2:47018->10.217.0.202:8775: read: connection reset by peer" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.726819 5001 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="756717c8-aaaf-4961-bff8-1dbcebc3405b" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.202:8775/\": read tcp 10.217.0.2:47026->10.217.0.202:8775: read: connection reset by peer" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.747638 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell12af7-account-delete-hwpv2" event={"ID":"5024ec83-ca88-4a33-bc3a-c75444c3da94","Type":"ContainerDied","Data":"0d85309c50de5b9e7f0ca53810d937084cf4c22b6c50e13a6aa4feae5520e813"} Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.747685 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d85309c50de5b9e7f0ca53810d937084cf4c22b6c50e13a6aa4feae5520e813" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.747649 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell12af7-account-delete-hwpv2" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.750927 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0459a-account-delete-gh72t" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.751011 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0459a-account-delete-gh72t" event={"ID":"39f81527-6bd1-4210-81d9-c4274447b5bd","Type":"ContainerDied","Data":"571b2cb8d401806ffc5a964bd9042145c332b46d4d9ec547c53e1ab406341026"} Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.751051 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="571b2cb8d401806ffc5a964bd9042145c332b46d4d9ec547c53e1ab406341026" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.752458 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapieb14-account-delete-mx92g" event={"ID":"951a4226-b9e2-4b48-86cf-efaa562b986e","Type":"ContainerDied","Data":"2f0b2a04d2485ab5f2ba6b4cc4e1710edbe6ffb035503afca480d1643c0fc808"} Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.752490 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f0b2a04d2485ab5f2ba6b4cc4e1710edbe6ffb035503afca480d1643c0fc808" Jan 28 17:46:40 crc kubenswrapper[5001]: I0128 17:46:40.752494 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapieb14-account-delete-mx92g" Jan 28 17:46:40 crc kubenswrapper[5001]: E0128 17:46:40.982234 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9681d61cb44ce8dfff6bb637148f27f755b2b4ae8d2268977594c67a8dfc34d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 17:46:40 crc kubenswrapper[5001]: E0128 17:46:40.983764 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9681d61cb44ce8dfff6bb637148f27f755b2b4ae8d2268977594c67a8dfc34d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 17:46:40 crc kubenswrapper[5001]: E0128 17:46:40.984964 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9681d61cb44ce8dfff6bb637148f27f755b2b4ae8d2268977594c67a8dfc34d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 17:46:40 crc kubenswrapper[5001]: E0128 17:46:40.985029 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="2c6eeeaf-d3e0-44af-a035-17e0a315fc53" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.151214 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.196523 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/756717c8-aaaf-4961-bff8-1dbcebc3405b-config-data\") pod \"756717c8-aaaf-4961-bff8-1dbcebc3405b\" (UID: \"756717c8-aaaf-4961-bff8-1dbcebc3405b\") " Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.196608 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/756717c8-aaaf-4961-bff8-1dbcebc3405b-logs\") pod \"756717c8-aaaf-4961-bff8-1dbcebc3405b\" (UID: \"756717c8-aaaf-4961-bff8-1dbcebc3405b\") " Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.196682 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrbgw\" (UniqueName: \"kubernetes.io/projected/756717c8-aaaf-4961-bff8-1dbcebc3405b-kube-api-access-qrbgw\") pod \"756717c8-aaaf-4961-bff8-1dbcebc3405b\" (UID: \"756717c8-aaaf-4961-bff8-1dbcebc3405b\") " Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.200461 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/756717c8-aaaf-4961-bff8-1dbcebc3405b-logs" (OuterVolumeSpecName: "logs") pod "756717c8-aaaf-4961-bff8-1dbcebc3405b" (UID: "756717c8-aaaf-4961-bff8-1dbcebc3405b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.203459 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/756717c8-aaaf-4961-bff8-1dbcebc3405b-kube-api-access-qrbgw" (OuterVolumeSpecName: "kube-api-access-qrbgw") pod "756717c8-aaaf-4961-bff8-1dbcebc3405b" (UID: "756717c8-aaaf-4961-bff8-1dbcebc3405b"). InnerVolumeSpecName "kube-api-access-qrbgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.226112 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/756717c8-aaaf-4961-bff8-1dbcebc3405b-config-data" (OuterVolumeSpecName: "config-data") pod "756717c8-aaaf-4961-bff8-1dbcebc3405b" (UID: "756717c8-aaaf-4961-bff8-1dbcebc3405b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.270581 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.299617 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhlwd\" (UniqueName: \"kubernetes.io/projected/152386d6-cdfe-4d11-91b1-bc85d02bdcc0-kube-api-access-nhlwd\") pod \"152386d6-cdfe-4d11-91b1-bc85d02bdcc0\" (UID: \"152386d6-cdfe-4d11-91b1-bc85d02bdcc0\") " Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.299767 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/152386d6-cdfe-4d11-91b1-bc85d02bdcc0-logs\") pod \"152386d6-cdfe-4d11-91b1-bc85d02bdcc0\" (UID: \"152386d6-cdfe-4d11-91b1-bc85d02bdcc0\") " Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.300816 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/152386d6-cdfe-4d11-91b1-bc85d02bdcc0-logs" (OuterVolumeSpecName: "logs") pod "152386d6-cdfe-4d11-91b1-bc85d02bdcc0" (UID: "152386d6-cdfe-4d11-91b1-bc85d02bdcc0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.300962 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/152386d6-cdfe-4d11-91b1-bc85d02bdcc0-config-data\") pod \"152386d6-cdfe-4d11-91b1-bc85d02bdcc0\" (UID: \"152386d6-cdfe-4d11-91b1-bc85d02bdcc0\") " Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.302655 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/152386d6-cdfe-4d11-91b1-bc85d02bdcc0-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.302681 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/756717c8-aaaf-4961-bff8-1dbcebc3405b-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.302694 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/756717c8-aaaf-4961-bff8-1dbcebc3405b-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.302706 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrbgw\" (UniqueName: \"kubernetes.io/projected/756717c8-aaaf-4961-bff8-1dbcebc3405b-kube-api-access-qrbgw\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.313053 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/152386d6-cdfe-4d11-91b1-bc85d02bdcc0-kube-api-access-nhlwd" (OuterVolumeSpecName: "kube-api-access-nhlwd") pod "152386d6-cdfe-4d11-91b1-bc85d02bdcc0" (UID: "152386d6-cdfe-4d11-91b1-bc85d02bdcc0"). InnerVolumeSpecName "kube-api-access-nhlwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.323380 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/152386d6-cdfe-4d11-91b1-bc85d02bdcc0-config-data" (OuterVolumeSpecName: "config-data") pod "152386d6-cdfe-4d11-91b1-bc85d02bdcc0" (UID: "152386d6-cdfe-4d11-91b1-bc85d02bdcc0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.404729 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/152386d6-cdfe-4d11-91b1-bc85d02bdcc0-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.404770 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhlwd\" (UniqueName: \"kubernetes.io/projected/152386d6-cdfe-4d11-91b1-bc85d02bdcc0-kube-api-access-nhlwd\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:41 crc kubenswrapper[5001]: E0128 17:46:41.404863 5001 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 17:46:41 crc kubenswrapper[5001]: E0128 17:46:41.404930 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-config-data podName:7d7efcb6-bbc3-4a64-83ec-66b36aea0fed nodeName:}" failed. No retries permitted until 2026-01-28 17:46:45.404911259 +0000 UTC m=+1851.572699489 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "7d7efcb6-bbc3-4a64-83ec-66b36aea0fed") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.765750 5001 generic.go:334] "Generic (PLEG): container finished" podID="152386d6-cdfe-4d11-91b1-bc85d02bdcc0" containerID="7bf8f53cfedb755719e00ebf81dd03a6cbc2a1747244e3740db4a8a92692ecbe" exitCode=0 Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.765823 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.765850 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"152386d6-cdfe-4d11-91b1-bc85d02bdcc0","Type":"ContainerDied","Data":"7bf8f53cfedb755719e00ebf81dd03a6cbc2a1747244e3740db4a8a92692ecbe"} Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.766474 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"152386d6-cdfe-4d11-91b1-bc85d02bdcc0","Type":"ContainerDied","Data":"6442506b4c6992f131b739b6d686a892cd229b4d595807ac9dc5cf41fac6238e"} Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.766497 5001 scope.go:117] "RemoveContainer" containerID="7bf8f53cfedb755719e00ebf81dd03a6cbc2a1747244e3740db4a8a92692ecbe" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.770819 5001 generic.go:334] "Generic (PLEG): container finished" podID="756717c8-aaaf-4961-bff8-1dbcebc3405b" containerID="f24893eca4729ccd8cccea9023d01a6f1d4ec092fefc8df91b521043f211ecfd" exitCode=0 Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.770886 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"756717c8-aaaf-4961-bff8-1dbcebc3405b","Type":"ContainerDied","Data":"f24893eca4729ccd8cccea9023d01a6f1d4ec092fefc8df91b521043f211ecfd"} Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.770908 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.770915 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"756717c8-aaaf-4961-bff8-1dbcebc3405b","Type":"ContainerDied","Data":"726065172f51bfe73eace220e2346f25b77da65e6931f3bc6ab5a08f7db49240"} Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.795642 5001 scope.go:117] "RemoveContainer" containerID="73a822a8de0350504d062b46850c92895ac174850d64800c3b07873b46127b98" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.817911 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.825188 5001 scope.go:117] "RemoveContainer" containerID="7bf8f53cfedb755719e00ebf81dd03a6cbc2a1747244e3740db4a8a92692ecbe" Jan 28 17:46:41 crc kubenswrapper[5001]: E0128 17:46:41.825689 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bf8f53cfedb755719e00ebf81dd03a6cbc2a1747244e3740db4a8a92692ecbe\": container with ID starting with 7bf8f53cfedb755719e00ebf81dd03a6cbc2a1747244e3740db4a8a92692ecbe not found: ID does not exist" containerID="7bf8f53cfedb755719e00ebf81dd03a6cbc2a1747244e3740db4a8a92692ecbe" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.825721 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bf8f53cfedb755719e00ebf81dd03a6cbc2a1747244e3740db4a8a92692ecbe"} err="failed to get container status \"7bf8f53cfedb755719e00ebf81dd03a6cbc2a1747244e3740db4a8a92692ecbe\": rpc error: code = NotFound desc = could not find container \"7bf8f53cfedb755719e00ebf81dd03a6cbc2a1747244e3740db4a8a92692ecbe\": container with ID starting with 7bf8f53cfedb755719e00ebf81dd03a6cbc2a1747244e3740db4a8a92692ecbe not found: ID does not exist" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.825742 5001 scope.go:117] "RemoveContainer" containerID="73a822a8de0350504d062b46850c92895ac174850d64800c3b07873b46127b98" Jan 28 17:46:41 crc kubenswrapper[5001]: E0128 17:46:41.825929 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73a822a8de0350504d062b46850c92895ac174850d64800c3b07873b46127b98\": container with ID starting with 73a822a8de0350504d062b46850c92895ac174850d64800c3b07873b46127b98 not found: ID does not exist" containerID="73a822a8de0350504d062b46850c92895ac174850d64800c3b07873b46127b98" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.825949 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73a822a8de0350504d062b46850c92895ac174850d64800c3b07873b46127b98"} err="failed to get container status \"73a822a8de0350504d062b46850c92895ac174850d64800c3b07873b46127b98\": rpc error: code = NotFound desc = could not find container \"73a822a8de0350504d062b46850c92895ac174850d64800c3b07873b46127b98\": container with ID starting with 73a822a8de0350504d062b46850c92895ac174850d64800c3b07873b46127b98 not found: ID does not exist" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.825962 5001 scope.go:117] "RemoveContainer" containerID="f24893eca4729ccd8cccea9023d01a6f1d4ec092fefc8df91b521043f211ecfd" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.831116 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 
28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.842744 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.847293 5001 scope.go:117] "RemoveContainer" containerID="3fd2cd67926cebadba1b08d9bdc44694691fdf05c87db467ade93c45539a16cd" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.851413 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.873718 5001 scope.go:117] "RemoveContainer" containerID="f24893eca4729ccd8cccea9023d01a6f1d4ec092fefc8df91b521043f211ecfd" Jan 28 17:46:41 crc kubenswrapper[5001]: E0128 17:46:41.874259 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f24893eca4729ccd8cccea9023d01a6f1d4ec092fefc8df91b521043f211ecfd\": container with ID starting with f24893eca4729ccd8cccea9023d01a6f1d4ec092fefc8df91b521043f211ecfd not found: ID does not exist" containerID="f24893eca4729ccd8cccea9023d01a6f1d4ec092fefc8df91b521043f211ecfd" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.874297 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f24893eca4729ccd8cccea9023d01a6f1d4ec092fefc8df91b521043f211ecfd"} err="failed to get container status \"f24893eca4729ccd8cccea9023d01a6f1d4ec092fefc8df91b521043f211ecfd\": rpc error: code = NotFound desc = could not find container \"f24893eca4729ccd8cccea9023d01a6f1d4ec092fefc8df91b521043f211ecfd\": container with ID starting with f24893eca4729ccd8cccea9023d01a6f1d4ec092fefc8df91b521043f211ecfd not found: ID does not exist" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.874319 5001 scope.go:117] "RemoveContainer" containerID="3fd2cd67926cebadba1b08d9bdc44694691fdf05c87db467ade93c45539a16cd" Jan 28 17:46:41 crc kubenswrapper[5001]: E0128 17:46:41.874763 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fd2cd67926cebadba1b08d9bdc44694691fdf05c87db467ade93c45539a16cd\": container with ID starting with 3fd2cd67926cebadba1b08d9bdc44694691fdf05c87db467ade93c45539a16cd not found: ID does not exist" containerID="3fd2cd67926cebadba1b08d9bdc44694691fdf05c87db467ade93c45539a16cd" Jan 28 17:46:41 crc kubenswrapper[5001]: I0128 17:46:41.874789 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd2cd67926cebadba1b08d9bdc44694691fdf05c87db467ade93c45539a16cd"} err="failed to get container status \"3fd2cd67926cebadba1b08d9bdc44694691fdf05c87db467ade93c45539a16cd\": rpc error: code = NotFound desc = could not find container \"3fd2cd67926cebadba1b08d9bdc44694691fdf05c87db467ade93c45539a16cd\": container with ID starting with 3fd2cd67926cebadba1b08d9bdc44694691fdf05c87db467ade93c45539a16cd not found: ID does not exist" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.334487 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-7q4dw"] Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.343547 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-7q4dw"] Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.349899 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-2af7-account-create-update-f6zpn"] Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 
17:46:42.355403 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell12af7-account-delete-hwpv2"] Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.361764 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell12af7-account-delete-hwpv2"] Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.390275 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-2af7-account-create-update-f6zpn"] Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.424101 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-z89s5"] Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.435244 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-z89s5"] Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.445513 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.462684 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novaapieb14-account-delete-mx92g"] Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.509683 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-eb14-account-create-update-bfbhj"] Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.515798 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-eb14-account-create-update-bfbhj"] Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.521451 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novaapieb14-account-delete-mx92g"] Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.522027 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a5d1c3-1661-46d5-8bca-3a91dbe1d816-config-data\") pod \"42a5d1c3-1661-46d5-8bca-3a91dbe1d816\" (UID: \"42a5d1c3-1661-46d5-8bca-3a91dbe1d816\") " Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.522080 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v4l2\" (UniqueName: \"kubernetes.io/projected/42a5d1c3-1661-46d5-8bca-3a91dbe1d816-kube-api-access-4v4l2\") pod \"42a5d1c3-1661-46d5-8bca-3a91dbe1d816\" (UID: \"42a5d1c3-1661-46d5-8bca-3a91dbe1d816\") " Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.528819 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a5d1c3-1661-46d5-8bca-3a91dbe1d816-kube-api-access-4v4l2" (OuterVolumeSpecName: "kube-api-access-4v4l2") pod "42a5d1c3-1661-46d5-8bca-3a91dbe1d816" (UID: "42a5d1c3-1661-46d5-8bca-3a91dbe1d816"). InnerVolumeSpecName "kube-api-access-4v4l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.563446 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-s9qwz"] Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.565629 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a5d1c3-1661-46d5-8bca-3a91dbe1d816-config-data" (OuterVolumeSpecName: "config-data") pod "42a5d1c3-1661-46d5-8bca-3a91dbe1d816" (UID: "42a5d1c3-1661-46d5-8bca-3a91dbe1d816"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.581995 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-s9qwz"] Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.588070 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell0459a-account-delete-gh72t"] Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.593204 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell0459a-account-delete-gh72t"] Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.605689 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0906dfe4-e13d-4c1b-a310-9ece4e46a3d6" path="/var/lib/kubelet/pods/0906dfe4-e13d-4c1b-a310-9ece4e46a3d6/volumes" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.606890 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="152386d6-cdfe-4d11-91b1-bc85d02bdcc0" path="/var/lib/kubelet/pods/152386d6-cdfe-4d11-91b1-bc85d02bdcc0/volumes" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.607682 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b" path="/var/lib/kubelet/pods/16ef7ca3-f0ea-46e1-b9a4-10c9fa59995b/volumes" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.609168 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39f81527-6bd1-4210-81d9-c4274447b5bd" path="/var/lib/kubelet/pods/39f81527-6bd1-4210-81d9-c4274447b5bd/volumes" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.609793 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5024ec83-ca88-4a33-bc3a-c75444c3da94" path="/var/lib/kubelet/pods/5024ec83-ca88-4a33-bc3a-c75444c3da94/volumes" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.610716 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="687d3405-fbd8-4494-9398-c9e4be313cd9" path="/var/lib/kubelet/pods/687d3405-fbd8-4494-9398-c9e4be313cd9/volumes" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.611388 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="756717c8-aaaf-4961-bff8-1dbcebc3405b" path="/var/lib/kubelet/pods/756717c8-aaaf-4961-bff8-1dbcebc3405b/volumes" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.613145 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c50528-48fd-4603-9f3c-6217d58ca8d1" path="/var/lib/kubelet/pods/83c50528-48fd-4603-9f3c-6217d58ca8d1/volumes" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.614129 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="951a4226-b9e2-4b48-86cf-efaa562b986e" path="/var/lib/kubelet/pods/951a4226-b9e2-4b48-86cf-efaa562b986e/volumes" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.614863 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7ca8ffb-f4c8-4394-9b96-85de383793b8" path="/var/lib/kubelet/pods/a7ca8ffb-f4c8-4394-9b96-85de383793b8/volumes" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.616670 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-459a-account-create-update-tgh8r"] Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.616746 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-459a-account-create-update-tgh8r"] Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.624800 5001 reconciler_common.go:293] 
"Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a5d1c3-1661-46d5-8bca-3a91dbe1d816-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.624928 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v4l2\" (UniqueName: \"kubernetes.io/projected/42a5d1c3-1661-46d5-8bca-3a91dbe1d816-kube-api-access-4v4l2\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.784872 5001 generic.go:334] "Generic (PLEG): container finished" podID="42a5d1c3-1661-46d5-8bca-3a91dbe1d816" containerID="df13e5166aa89bdf11914322b33d0b320fdf9f9e06bd134579be5e12b3d5c442" exitCode=0 Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.784935 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.784993 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"42a5d1c3-1661-46d5-8bca-3a91dbe1d816","Type":"ContainerDied","Data":"df13e5166aa89bdf11914322b33d0b320fdf9f9e06bd134579be5e12b3d5c442"} Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.785033 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"42a5d1c3-1661-46d5-8bca-3a91dbe1d816","Type":"ContainerDied","Data":"34c611eaf5a9a27c576371e2269cef87e6ca3a0f33dc464a0615d87181c2509f"} Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.785057 5001 scope.go:117] "RemoveContainer" containerID="df13e5166aa89bdf11914322b33d0b320fdf9f9e06bd134579be5e12b3d5c442" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.816667 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.820930 5001 scope.go:117] "RemoveContainer" containerID="df13e5166aa89bdf11914322b33d0b320fdf9f9e06bd134579be5e12b3d5c442" Jan 28 17:46:42 crc kubenswrapper[5001]: E0128 17:46:42.821466 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df13e5166aa89bdf11914322b33d0b320fdf9f9e06bd134579be5e12b3d5c442\": container with ID starting with df13e5166aa89bdf11914322b33d0b320fdf9f9e06bd134579be5e12b3d5c442 not found: ID does not exist" containerID="df13e5166aa89bdf11914322b33d0b320fdf9f9e06bd134579be5e12b3d5c442" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.821524 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df13e5166aa89bdf11914322b33d0b320fdf9f9e06bd134579be5e12b3d5c442"} err="failed to get container status \"df13e5166aa89bdf11914322b33d0b320fdf9f9e06bd134579be5e12b3d5c442\": rpc error: code = NotFound desc = could not find container \"df13e5166aa89bdf11914322b33d0b320fdf9f9e06bd134579be5e12b3d5c442\": container with ID starting with df13e5166aa89bdf11914322b33d0b320fdf9f9e06bd134579be5e12b3d5c442 not found: ID does not exist" Jan 28 17:46:42 crc kubenswrapper[5001]: I0128 17:46:42.826380 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:46:43 crc kubenswrapper[5001]: I0128 17:46:43.417333 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:43 crc kubenswrapper[5001]: I0128 17:46:43.439101 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6eeeaf-d3e0-44af-a035-17e0a315fc53-config-data\") pod \"2c6eeeaf-d3e0-44af-a035-17e0a315fc53\" (UID: \"2c6eeeaf-d3e0-44af-a035-17e0a315fc53\") " Jan 28 17:46:43 crc kubenswrapper[5001]: I0128 17:46:43.439261 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6bbn\" (UniqueName: \"kubernetes.io/projected/2c6eeeaf-d3e0-44af-a035-17e0a315fc53-kube-api-access-p6bbn\") pod \"2c6eeeaf-d3e0-44af-a035-17e0a315fc53\" (UID: \"2c6eeeaf-d3e0-44af-a035-17e0a315fc53\") " Jan 28 17:46:43 crc kubenswrapper[5001]: I0128 17:46:43.445773 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c6eeeaf-d3e0-44af-a035-17e0a315fc53-kube-api-access-p6bbn" (OuterVolumeSpecName: "kube-api-access-p6bbn") pod "2c6eeeaf-d3e0-44af-a035-17e0a315fc53" (UID: "2c6eeeaf-d3e0-44af-a035-17e0a315fc53"). InnerVolumeSpecName "kube-api-access-p6bbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:43 crc kubenswrapper[5001]: I0128 17:46:43.463360 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6eeeaf-d3e0-44af-a035-17e0a315fc53-config-data" (OuterVolumeSpecName: "config-data") pod "2c6eeeaf-d3e0-44af-a035-17e0a315fc53" (UID: "2c6eeeaf-d3e0-44af-a035-17e0a315fc53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:46:43 crc kubenswrapper[5001]: I0128 17:46:43.540452 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6bbn\" (UniqueName: \"kubernetes.io/projected/2c6eeeaf-d3e0-44af-a035-17e0a315fc53-kube-api-access-p6bbn\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:43 crc kubenswrapper[5001]: I0128 17:46:43.540487 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6eeeaf-d3e0-44af-a035-17e0a315fc53-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:43 crc kubenswrapper[5001]: I0128 17:46:43.800055 5001 generic.go:334] "Generic (PLEG): container finished" podID="2c6eeeaf-d3e0-44af-a035-17e0a315fc53" containerID="b9681d61cb44ce8dfff6bb637148f27f755b2b4ae8d2268977594c67a8dfc34d" exitCode=0 Jan 28 17:46:43 crc kubenswrapper[5001]: I0128 17:46:43.800131 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"2c6eeeaf-d3e0-44af-a035-17e0a315fc53","Type":"ContainerDied","Data":"b9681d61cb44ce8dfff6bb637148f27f755b2b4ae8d2268977594c67a8dfc34d"} Jan 28 17:46:43 crc kubenswrapper[5001]: I0128 17:46:43.800326 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"2c6eeeaf-d3e0-44af-a035-17e0a315fc53","Type":"ContainerDied","Data":"6a93a27fd9be38e3b22ff59d2283f2d7fbec4b1adbbdefa7dd98ffb71a723445"} Jan 28 17:46:43 crc kubenswrapper[5001]: I0128 17:46:43.800163 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:43 crc kubenswrapper[5001]: I0128 17:46:43.800344 5001 scope.go:117] "RemoveContainer" containerID="b9681d61cb44ce8dfff6bb637148f27f755b2b4ae8d2268977594c67a8dfc34d" Jan 28 17:46:43 crc kubenswrapper[5001]: I0128 17:46:43.827071 5001 scope.go:117] "RemoveContainer" containerID="b9681d61cb44ce8dfff6bb637148f27f755b2b4ae8d2268977594c67a8dfc34d" Jan 28 17:46:43 crc kubenswrapper[5001]: E0128 17:46:43.827669 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9681d61cb44ce8dfff6bb637148f27f755b2b4ae8d2268977594c67a8dfc34d\": container with ID starting with b9681d61cb44ce8dfff6bb637148f27f755b2b4ae8d2268977594c67a8dfc34d not found: ID does not exist" containerID="b9681d61cb44ce8dfff6bb637148f27f755b2b4ae8d2268977594c67a8dfc34d" Jan 28 17:46:43 crc kubenswrapper[5001]: I0128 17:46:43.827710 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9681d61cb44ce8dfff6bb637148f27f755b2b4ae8d2268977594c67a8dfc34d"} err="failed to get container status \"b9681d61cb44ce8dfff6bb637148f27f755b2b4ae8d2268977594c67a8dfc34d\": rpc error: code = NotFound desc = could not find container \"b9681d61cb44ce8dfff6bb637148f27f755b2b4ae8d2268977594c67a8dfc34d\": container with ID starting with b9681d61cb44ce8dfff6bb637148f27f755b2b4ae8d2268977594c67a8dfc34d not found: ID does not exist" Jan 28 17:46:43 crc kubenswrapper[5001]: I0128 17:46:43.851117 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:46:43 crc kubenswrapper[5001]: I0128 17:46:43.859865 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:46:44 crc kubenswrapper[5001]: E0128 17:46:44.112752 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:46:44 crc kubenswrapper[5001]: E0128 17:46:44.114508 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:46:44 crc kubenswrapper[5001]: E0128 17:46:44.116154 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:46:44 crc kubenswrapper[5001]: E0128 17:46:44.116219 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.460455 5001 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["nova-kuttl-default/nova-api-db-create-pcn2s"] Jan 28 17:46:44 crc kubenswrapper[5001]: E0128 17:46:44.460815 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152386d6-cdfe-4d11-91b1-bc85d02bdcc0" containerName="nova-kuttl-api-log" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.460840 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="152386d6-cdfe-4d11-91b1-bc85d02bdcc0" containerName="nova-kuttl-api-log" Jan 28 17:46:44 crc kubenswrapper[5001]: E0128 17:46:44.460853 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5024ec83-ca88-4a33-bc3a-c75444c3da94" containerName="mariadb-account-delete" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.460862 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="5024ec83-ca88-4a33-bc3a-c75444c3da94" containerName="mariadb-account-delete" Jan 28 17:46:44 crc kubenswrapper[5001]: E0128 17:46:44.460878 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86936800-e6b8-42cd-ac73-b9213072b89e" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.460885 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="86936800-e6b8-42cd-ac73-b9213072b89e" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:46:44 crc kubenswrapper[5001]: E0128 17:46:44.460901 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="756717c8-aaaf-4961-bff8-1dbcebc3405b" containerName="nova-kuttl-metadata-log" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.460908 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="756717c8-aaaf-4961-bff8-1dbcebc3405b" containerName="nova-kuttl-metadata-log" Jan 28 17:46:44 crc kubenswrapper[5001]: E0128 17:46:44.460922 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5490c0-8d10-4966-85b4-e3045a16a80d" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.460930 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e5490c0-8d10-4966-85b4-e3045a16a80d" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 28 17:46:44 crc kubenswrapper[5001]: E0128 17:46:44.460940 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6eeeaf-d3e0-44af-a035-17e0a315fc53" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.460950 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6eeeaf-d3e0-44af-a035-17e0a315fc53" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:46:44 crc kubenswrapper[5001]: E0128 17:46:44.460968 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f81527-6bd1-4210-81d9-c4274447b5bd" containerName="mariadb-account-delete" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.460978 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f81527-6bd1-4210-81d9-c4274447b5bd" containerName="mariadb-account-delete" Jan 28 17:46:44 crc kubenswrapper[5001]: E0128 17:46:44.460989 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a5d1c3-1661-46d5-8bca-3a91dbe1d816" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.460996 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a5d1c3-1661-46d5-8bca-3a91dbe1d816" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:46:44 crc kubenswrapper[5001]: E0128 17:46:44.461006 5001 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="951a4226-b9e2-4b48-86cf-efaa562b986e" containerName="mariadb-account-delete" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.461030 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="951a4226-b9e2-4b48-86cf-efaa562b986e" containerName="mariadb-account-delete" Jan 28 17:46:44 crc kubenswrapper[5001]: E0128 17:46:44.461044 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="756717c8-aaaf-4961-bff8-1dbcebc3405b" containerName="nova-kuttl-metadata-metadata" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.461051 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="756717c8-aaaf-4961-bff8-1dbcebc3405b" containerName="nova-kuttl-metadata-metadata" Jan 28 17:46:44 crc kubenswrapper[5001]: E0128 17:46:44.461068 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152386d6-cdfe-4d11-91b1-bc85d02bdcc0" containerName="nova-kuttl-api-api" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.461076 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="152386d6-cdfe-4d11-91b1-bc85d02bdcc0" containerName="nova-kuttl-api-api" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.461253 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6eeeaf-d3e0-44af-a035-17e0a315fc53" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.461275 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="42a5d1c3-1661-46d5-8bca-3a91dbe1d816" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.461288 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="5024ec83-ca88-4a33-bc3a-c75444c3da94" containerName="mariadb-account-delete" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.461296 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="756717c8-aaaf-4961-bff8-1dbcebc3405b" containerName="nova-kuttl-metadata-metadata" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.461309 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e5490c0-8d10-4966-85b4-e3045a16a80d" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.461323 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="951a4226-b9e2-4b48-86cf-efaa562b986e" containerName="mariadb-account-delete" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.461337 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="756717c8-aaaf-4961-bff8-1dbcebc3405b" containerName="nova-kuttl-metadata-log" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.461346 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="152386d6-cdfe-4d11-91b1-bc85d02bdcc0" containerName="nova-kuttl-api-log" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.461356 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="152386d6-cdfe-4d11-91b1-bc85d02bdcc0" containerName="nova-kuttl-api-api" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.461364 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="86936800-e6b8-42cd-ac73-b9213072b89e" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.461372 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="39f81527-6bd1-4210-81d9-c4274447b5bd" containerName="mariadb-account-delete" Jan 28 17:46:44 crc kubenswrapper[5001]: 
I0128 17:46:44.461978 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-pcn2s" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.468270 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-pcn2s"] Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.556030 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgndb\" (UniqueName: \"kubernetes.io/projected/b747464b-8e55-4056-a282-11d656d849dc-kube-api-access-sgndb\") pod \"nova-api-db-create-pcn2s\" (UID: \"b747464b-8e55-4056-a282-11d656d849dc\") " pod="nova-kuttl-default/nova-api-db-create-pcn2s" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.556140 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b747464b-8e55-4056-a282-11d656d849dc-operator-scripts\") pod \"nova-api-db-create-pcn2s\" (UID: \"b747464b-8e55-4056-a282-11d656d849dc\") " pod="nova-kuttl-default/nova-api-db-create-pcn2s" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.560331 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-tb26s"] Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.563409 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-tb26s" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.573718 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-tb26s"] Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.613668 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c6eeeaf-d3e0-44af-a035-17e0a315fc53" path="/var/lib/kubelet/pods/2c6eeeaf-d3e0-44af-a035-17e0a315fc53/volumes" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.614199 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a5d1c3-1661-46d5-8bca-3a91dbe1d816" path="/var/lib/kubelet/pods/42a5d1c3-1661-46d5-8bca-3a91dbe1d816/volumes" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.614646 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b05d2d1-cd75-4f7f-b89b-82192c1fb216" path="/var/lib/kubelet/pods/8b05d2d1-cd75-4f7f-b89b-82192c1fb216/volumes" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.657386 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgndb\" (UniqueName: \"kubernetes.io/projected/b747464b-8e55-4056-a282-11d656d849dc-kube-api-access-sgndb\") pod \"nova-api-db-create-pcn2s\" (UID: \"b747464b-8e55-4056-a282-11d656d849dc\") " pod="nova-kuttl-default/nova-api-db-create-pcn2s" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.657563 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b747464b-8e55-4056-a282-11d656d849dc-operator-scripts\") pod \"nova-api-db-create-pcn2s\" (UID: \"b747464b-8e55-4056-a282-11d656d849dc\") " pod="nova-kuttl-default/nova-api-db-create-pcn2s" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.657622 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9bbr\" (UniqueName: \"kubernetes.io/projected/7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11-kube-api-access-r9bbr\") pod 
\"nova-cell0-db-create-tb26s\" (UID: \"7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11\") " pod="nova-kuttl-default/nova-cell0-db-create-tb26s" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.657670 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11-operator-scripts\") pod \"nova-cell0-db-create-tb26s\" (UID: \"7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11\") " pod="nova-kuttl-default/nova-cell0-db-create-tb26s" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.658548 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b747464b-8e55-4056-a282-11d656d849dc-operator-scripts\") pod \"nova-api-db-create-pcn2s\" (UID: \"b747464b-8e55-4056-a282-11d656d849dc\") " pod="nova-kuttl-default/nova-api-db-create-pcn2s" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.673203 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-84be-account-create-update-rzrth"] Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.674793 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-84be-account-create-update-rzrth" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.680997 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-84be-account-create-update-rzrth"] Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.716803 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgndb\" (UniqueName: \"kubernetes.io/projected/b747464b-8e55-4056-a282-11d656d849dc-kube-api-access-sgndb\") pod \"nova-api-db-create-pcn2s\" (UID: \"b747464b-8e55-4056-a282-11d656d849dc\") " pod="nova-kuttl-default/nova-api-db-create-pcn2s" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.720903 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.758635 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11-operator-scripts\") pod \"nova-cell0-db-create-tb26s\" (UID: \"7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11\") " pod="nova-kuttl-default/nova-cell0-db-create-tb26s" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.759446 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849dd9e3-fa3a-413f-909f-356d87e51427-operator-scripts\") pod \"nova-api-84be-account-create-update-rzrth\" (UID: \"849dd9e3-fa3a-413f-909f-356d87e51427\") " pod="nova-kuttl-default/nova-api-84be-account-create-update-rzrth" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.759580 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bbr\" (UniqueName: \"kubernetes.io/projected/7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11-kube-api-access-r9bbr\") pod \"nova-cell0-db-create-tb26s\" (UID: \"7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11\") " pod="nova-kuttl-default/nova-cell0-db-create-tb26s" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.759711 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snqfg\" (UniqueName: 
\"kubernetes.io/projected/849dd9e3-fa3a-413f-909f-356d87e51427-kube-api-access-snqfg\") pod \"nova-api-84be-account-create-update-rzrth\" (UID: \"849dd9e3-fa3a-413f-909f-356d87e51427\") " pod="nova-kuttl-default/nova-api-84be-account-create-update-rzrth" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.759775 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11-operator-scripts\") pod \"nova-cell0-db-create-tb26s\" (UID: \"7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11\") " pod="nova-kuttl-default/nova-cell0-db-create-tb26s" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.764908 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-5gvgn"] Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.766072 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-5gvgn" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.775401 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-5gvgn"] Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.780187 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9bbr\" (UniqueName: \"kubernetes.io/projected/7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11-kube-api-access-r9bbr\") pod \"nova-cell0-db-create-tb26s\" (UID: \"7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11\") " pod="nova-kuttl-default/nova-cell0-db-create-tb26s" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.780282 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-pcn2s" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.860983 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d09009c8-2bc4-4a46-8243-9f226a5aa244-operator-scripts\") pod \"nova-cell1-db-create-5gvgn\" (UID: \"d09009c8-2bc4-4a46-8243-9f226a5aa244\") " pod="nova-kuttl-default/nova-cell1-db-create-5gvgn" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.861084 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849dd9e3-fa3a-413f-909f-356d87e51427-operator-scripts\") pod \"nova-api-84be-account-create-update-rzrth\" (UID: \"849dd9e3-fa3a-413f-909f-356d87e51427\") " pod="nova-kuttl-default/nova-api-84be-account-create-update-rzrth" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.861113 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c5ds\" (UniqueName: \"kubernetes.io/projected/d09009c8-2bc4-4a46-8243-9f226a5aa244-kube-api-access-6c5ds\") pod \"nova-cell1-db-create-5gvgn\" (UID: \"d09009c8-2bc4-4a46-8243-9f226a5aa244\") " pod="nova-kuttl-default/nova-cell1-db-create-5gvgn" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.861136 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snqfg\" (UniqueName: \"kubernetes.io/projected/849dd9e3-fa3a-413f-909f-356d87e51427-kube-api-access-snqfg\") pod \"nova-api-84be-account-create-update-rzrth\" (UID: \"849dd9e3-fa3a-413f-909f-356d87e51427\") " pod="nova-kuttl-default/nova-api-84be-account-create-update-rzrth" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.861833 5001 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849dd9e3-fa3a-413f-909f-356d87e51427-operator-scripts\") pod \"nova-api-84be-account-create-update-rzrth\" (UID: \"849dd9e3-fa3a-413f-909f-356d87e51427\") " pod="nova-kuttl-default/nova-api-84be-account-create-update-rzrth" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.876642 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-tb26s" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.881175 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-6ca3-account-create-update-q8v42"] Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.882449 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-6ca3-account-create-update-q8v42" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.888765 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snqfg\" (UniqueName: \"kubernetes.io/projected/849dd9e3-fa3a-413f-909f-356d87e51427-kube-api-access-snqfg\") pod \"nova-api-84be-account-create-update-rzrth\" (UID: \"849dd9e3-fa3a-413f-909f-356d87e51427\") " pod="nova-kuttl-default/nova-api-84be-account-create-update-rzrth" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.889472 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.901757 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-6ca3-account-create-update-q8v42"] Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.962762 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/874f309a-0607-4381-99fa-eb25d0e34f02-operator-scripts\") pod \"nova-cell0-6ca3-account-create-update-q8v42\" (UID: \"874f309a-0607-4381-99fa-eb25d0e34f02\") " pod="nova-kuttl-default/nova-cell0-6ca3-account-create-update-q8v42" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.963196 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdftk\" (UniqueName: \"kubernetes.io/projected/874f309a-0607-4381-99fa-eb25d0e34f02-kube-api-access-gdftk\") pod \"nova-cell0-6ca3-account-create-update-q8v42\" (UID: \"874f309a-0607-4381-99fa-eb25d0e34f02\") " pod="nova-kuttl-default/nova-cell0-6ca3-account-create-update-q8v42" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.963400 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d09009c8-2bc4-4a46-8243-9f226a5aa244-operator-scripts\") pod \"nova-cell1-db-create-5gvgn\" (UID: \"d09009c8-2bc4-4a46-8243-9f226a5aa244\") " pod="nova-kuttl-default/nova-cell1-db-create-5gvgn" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.963491 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c5ds\" (UniqueName: \"kubernetes.io/projected/d09009c8-2bc4-4a46-8243-9f226a5aa244-kube-api-access-6c5ds\") pod \"nova-cell1-db-create-5gvgn\" (UID: \"d09009c8-2bc4-4a46-8243-9f226a5aa244\") " pod="nova-kuttl-default/nova-cell1-db-create-5gvgn" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.964404 5001 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d09009c8-2bc4-4a46-8243-9f226a5aa244-operator-scripts\") pod \"nova-cell1-db-create-5gvgn\" (UID: \"d09009c8-2bc4-4a46-8243-9f226a5aa244\") " pod="nova-kuttl-default/nova-cell1-db-create-5gvgn" Jan 28 17:46:44 crc kubenswrapper[5001]: I0128 17:46:44.979606 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c5ds\" (UniqueName: \"kubernetes.io/projected/d09009c8-2bc4-4a46-8243-9f226a5aa244-kube-api-access-6c5ds\") pod \"nova-cell1-db-create-5gvgn\" (UID: \"d09009c8-2bc4-4a46-8243-9f226a5aa244\") " pod="nova-kuttl-default/nova-cell1-db-create-5gvgn" Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.058066 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-84be-account-create-update-rzrth" Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.065820 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/874f309a-0607-4381-99fa-eb25d0e34f02-operator-scripts\") pod \"nova-cell0-6ca3-account-create-update-q8v42\" (UID: \"874f309a-0607-4381-99fa-eb25d0e34f02\") " pod="nova-kuttl-default/nova-cell0-6ca3-account-create-update-q8v42" Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.065904 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdftk\" (UniqueName: \"kubernetes.io/projected/874f309a-0607-4381-99fa-eb25d0e34f02-kube-api-access-gdftk\") pod \"nova-cell0-6ca3-account-create-update-q8v42\" (UID: \"874f309a-0607-4381-99fa-eb25d0e34f02\") " pod="nova-kuttl-default/nova-cell0-6ca3-account-create-update-q8v42" Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.066655 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/874f309a-0607-4381-99fa-eb25d0e34f02-operator-scripts\") pod \"nova-cell0-6ca3-account-create-update-q8v42\" (UID: \"874f309a-0607-4381-99fa-eb25d0e34f02\") " pod="nova-kuttl-default/nova-cell0-6ca3-account-create-update-q8v42" Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.080421 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-5gvgn" Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.085330 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-cda9-account-create-update-cwncp"] Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.086367 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-cda9-account-create-update-cwncp" Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.090189 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdftk\" (UniqueName: \"kubernetes.io/projected/874f309a-0607-4381-99fa-eb25d0e34f02-kube-api-access-gdftk\") pod \"nova-cell0-6ca3-account-create-update-q8v42\" (UID: \"874f309a-0607-4381-99fa-eb25d0e34f02\") " pod="nova-kuttl-default/nova-cell0-6ca3-account-create-update-q8v42" Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.094269 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.094854 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-cda9-account-create-update-cwncp"] Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.167834 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqzdx\" (UniqueName: \"kubernetes.io/projected/b9c76350-9eb6-495e-874f-a186c207bea6-kube-api-access-lqzdx\") pod \"nova-cell1-cda9-account-create-update-cwncp\" (UID: \"b9c76350-9eb6-495e-874f-a186c207bea6\") " pod="nova-kuttl-default/nova-cell1-cda9-account-create-update-cwncp" Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.167966 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9c76350-9eb6-495e-874f-a186c207bea6-operator-scripts\") pod \"nova-cell1-cda9-account-create-update-cwncp\" (UID: \"b9c76350-9eb6-495e-874f-a186c207bea6\") " pod="nova-kuttl-default/nova-cell1-cda9-account-create-update-cwncp" Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.232415 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-6ca3-account-create-update-q8v42" Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.269480 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqzdx\" (UniqueName: \"kubernetes.io/projected/b9c76350-9eb6-495e-874f-a186c207bea6-kube-api-access-lqzdx\") pod \"nova-cell1-cda9-account-create-update-cwncp\" (UID: \"b9c76350-9eb6-495e-874f-a186c207bea6\") " pod="nova-kuttl-default/nova-cell1-cda9-account-create-update-cwncp" Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.269518 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9c76350-9eb6-495e-874f-a186c207bea6-operator-scripts\") pod \"nova-cell1-cda9-account-create-update-cwncp\" (UID: \"b9c76350-9eb6-495e-874f-a186c207bea6\") " pod="nova-kuttl-default/nova-cell1-cda9-account-create-update-cwncp" Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.270246 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9c76350-9eb6-495e-874f-a186c207bea6-operator-scripts\") pod \"nova-cell1-cda9-account-create-update-cwncp\" (UID: \"b9c76350-9eb6-495e-874f-a186c207bea6\") " pod="nova-kuttl-default/nova-cell1-cda9-account-create-update-cwncp" Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.297145 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqzdx\" (UniqueName: \"kubernetes.io/projected/b9c76350-9eb6-495e-874f-a186c207bea6-kube-api-access-lqzdx\") pod \"nova-cell1-cda9-account-create-update-cwncp\" (UID: \"b9c76350-9eb6-495e-874f-a186c207bea6\") " pod="nova-kuttl-default/nova-cell1-cda9-account-create-update-cwncp" Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.300670 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-pcn2s"] Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.409061 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-cda9-account-create-update-cwncp" Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.438560 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-tb26s"] Jan 28 17:46:45 crc kubenswrapper[5001]: E0128 17:46:45.473123 5001 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 17:46:45 crc kubenswrapper[5001]: E0128 17:46:45.473198 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-config-data podName:7d7efcb6-bbc3-4a64-83ec-66b36aea0fed nodeName:}" failed. No retries permitted until 2026-01-28 17:46:53.473181625 +0000 UTC m=+1859.640969855 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "7d7efcb6-bbc3-4a64-83ec-66b36aea0fed") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.562884 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-84be-account-create-update-rzrth"] Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.576335 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-5gvgn"] Jan 28 17:46:45 crc kubenswrapper[5001]: W0128 17:46:45.601210 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd09009c8_2bc4_4a46_8243_9f226a5aa244.slice/crio-5e95822b597f6d69ffd1ce919d0919d2c604a49fe704ce89af4f43fbe8b9a259 WatchSource:0}: Error finding container 5e95822b597f6d69ffd1ce919d0919d2c604a49fe704ce89af4f43fbe8b9a259: Status 404 returned error can't find the container with id 5e95822b597f6d69ffd1ce919d0919d2c604a49fe704ce89af4f43fbe8b9a259 Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.698428 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-6ca3-account-create-update-q8v42"] Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.823015 5001 generic.go:334] "Generic (PLEG): container finished" podID="b747464b-8e55-4056-a282-11d656d849dc" containerID="40fec92a5a7a033031de6808dba64028963cf470744f8cc2acbf0a1137ab7322" exitCode=0 Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.823260 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-pcn2s" event={"ID":"b747464b-8e55-4056-a282-11d656d849dc","Type":"ContainerDied","Data":"40fec92a5a7a033031de6808dba64028963cf470744f8cc2acbf0a1137ab7322"} Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.823328 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-pcn2s" event={"ID":"b747464b-8e55-4056-a282-11d656d849dc","Type":"ContainerStarted","Data":"1c59af2012a93085d51df59d4593391b73bf5daa277be076ae06752284017c5e"} Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.827731 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-tb26s" event={"ID":"7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11","Type":"ContainerStarted","Data":"3d37cc8831ae3c4e892fa03e1574b1341938a982941cade14e14f31e6f349b09"} Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.827766 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-tb26s" event={"ID":"7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11","Type":"ContainerStarted","Data":"5d44bce69d47eeec668673ee6a4257fb4affa06a4b8fbbb011cdb948a17624b3"} Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.830192 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-84be-account-create-update-rzrth" event={"ID":"849dd9e3-fa3a-413f-909f-356d87e51427","Type":"ContainerStarted","Data":"60ad9422aec8abbf9dad0e9cf4ad47cf0de86d62b0a5a059a4af3a40ead4669c"} Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.830221 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-84be-account-create-update-rzrth" 
event={"ID":"849dd9e3-fa3a-413f-909f-356d87e51427","Type":"ContainerStarted","Data":"7e993217198163c52e46dee8606d72fd213b66d8fdcd2f30ad2afd1fbeedc7c0"} Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.832211 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-5gvgn" event={"ID":"d09009c8-2bc4-4a46-8243-9f226a5aa244","Type":"ContainerStarted","Data":"88a7e7523581a149933ecf7475ece153c78409eeb535fc1826f6f8a0820f1e66"} Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.832241 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-5gvgn" event={"ID":"d09009c8-2bc4-4a46-8243-9f226a5aa244","Type":"ContainerStarted","Data":"5e95822b597f6d69ffd1ce919d0919d2c604a49fe704ce89af4f43fbe8b9a259"} Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.833910 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-6ca3-account-create-update-q8v42" event={"ID":"874f309a-0607-4381-99fa-eb25d0e34f02","Type":"ContainerStarted","Data":"bafe1e57a6a43acd13e2d9709fbc979093c3cbdbf66659dfc890bd43567ddfd8"} Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.909391 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-api-84be-account-create-update-rzrth" podStartSLOduration=1.9093765889999998 podStartE2EDuration="1.909376589s" podCreationTimestamp="2026-01-28 17:46:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:46:45.906786234 +0000 UTC m=+1852.074574514" watchObservedRunningTime="2026-01-28 17:46:45.909376589 +0000 UTC m=+1852.077164819" Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.931188 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-cell0-db-create-tb26s" podStartSLOduration=1.9311647669999998 podStartE2EDuration="1.931164767s" podCreationTimestamp="2026-01-28 17:46:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:46:45.930275452 +0000 UTC m=+1852.098063682" watchObservedRunningTime="2026-01-28 17:46:45.931164767 +0000 UTC m=+1852.098952997" Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.947189 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-cda9-account-create-update-cwncp"] Jan 28 17:46:45 crc kubenswrapper[5001]: I0128 17:46:45.951772 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-cell1-db-create-5gvgn" podStartSLOduration=1.9517514409999999 podStartE2EDuration="1.951751441s" podCreationTimestamp="2026-01-28 17:46:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:46:45.945702167 +0000 UTC m=+1852.113490397" watchObservedRunningTime="2026-01-28 17:46:45.951751441 +0000 UTC m=+1852.119539671" Jan 28 17:46:45 crc kubenswrapper[5001]: W0128 17:46:45.952446 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9c76350_9eb6_495e_874f_a186c207bea6.slice/crio-9c979d95f678aec19e7f18b8b06108de1cf64d40457499e41cfc24c89d246e77 WatchSource:0}: Error finding container 9c979d95f678aec19e7f18b8b06108de1cf64d40457499e41cfc24c89d246e77: Status 404 returned error can't find the 
container with id 9c979d95f678aec19e7f18b8b06108de1cf64d40457499e41cfc24c89d246e77 Jan 28 17:46:46 crc kubenswrapper[5001]: I0128 17:46:46.593892 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:46:46 crc kubenswrapper[5001]: E0128 17:46:46.594421 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:46:46 crc kubenswrapper[5001]: I0128 17:46:46.845558 5001 generic.go:334] "Generic (PLEG): container finished" podID="849dd9e3-fa3a-413f-909f-356d87e51427" containerID="60ad9422aec8abbf9dad0e9cf4ad47cf0de86d62b0a5a059a4af3a40ead4669c" exitCode=0 Jan 28 17:46:46 crc kubenswrapper[5001]: I0128 17:46:46.845688 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-84be-account-create-update-rzrth" event={"ID":"849dd9e3-fa3a-413f-909f-356d87e51427","Type":"ContainerDied","Data":"60ad9422aec8abbf9dad0e9cf4ad47cf0de86d62b0a5a059a4af3a40ead4669c"} Jan 28 17:46:46 crc kubenswrapper[5001]: I0128 17:46:46.847353 5001 generic.go:334] "Generic (PLEG): container finished" podID="7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11" containerID="3d37cc8831ae3c4e892fa03e1574b1341938a982941cade14e14f31e6f349b09" exitCode=0 Jan 28 17:46:46 crc kubenswrapper[5001]: I0128 17:46:46.847395 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-tb26s" event={"ID":"7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11","Type":"ContainerDied","Data":"3d37cc8831ae3c4e892fa03e1574b1341938a982941cade14e14f31e6f349b09"} Jan 28 17:46:46 crc kubenswrapper[5001]: I0128 17:46:46.848707 5001 generic.go:334] "Generic (PLEG): container finished" podID="874f309a-0607-4381-99fa-eb25d0e34f02" containerID="daff58fa8338d8a89e855b146c86a7c35e4ea1a75d2a16b00dd9f2b82616eee7" exitCode=0 Jan 28 17:46:46 crc kubenswrapper[5001]: I0128 17:46:46.848749 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-6ca3-account-create-update-q8v42" event={"ID":"874f309a-0607-4381-99fa-eb25d0e34f02","Type":"ContainerDied","Data":"daff58fa8338d8a89e855b146c86a7c35e4ea1a75d2a16b00dd9f2b82616eee7"} Jan 28 17:46:46 crc kubenswrapper[5001]: I0128 17:46:46.850250 5001 generic.go:334] "Generic (PLEG): container finished" podID="d09009c8-2bc4-4a46-8243-9f226a5aa244" containerID="88a7e7523581a149933ecf7475ece153c78409eeb535fc1826f6f8a0820f1e66" exitCode=0 Jan 28 17:46:46 crc kubenswrapper[5001]: I0128 17:46:46.850308 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-5gvgn" event={"ID":"d09009c8-2bc4-4a46-8243-9f226a5aa244","Type":"ContainerDied","Data":"88a7e7523581a149933ecf7475ece153c78409eeb535fc1826f6f8a0820f1e66"} Jan 28 17:46:46 crc kubenswrapper[5001]: I0128 17:46:46.851809 5001 generic.go:334] "Generic (PLEG): container finished" podID="b9c76350-9eb6-495e-874f-a186c207bea6" containerID="4e05d46858e5c1e841905d6260e5a232b33fe918555b9e5925a5952104db8222" exitCode=0 Jan 28 17:46:46 crc kubenswrapper[5001]: I0128 17:46:46.851891 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-cda9-account-create-update-cwncp" 
event={"ID":"b9c76350-9eb6-495e-874f-a186c207bea6","Type":"ContainerDied","Data":"4e05d46858e5c1e841905d6260e5a232b33fe918555b9e5925a5952104db8222"} Jan 28 17:46:46 crc kubenswrapper[5001]: I0128 17:46:46.851941 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-cda9-account-create-update-cwncp" event={"ID":"b9c76350-9eb6-495e-874f-a186c207bea6","Type":"ContainerStarted","Data":"9c979d95f678aec19e7f18b8b06108de1cf64d40457499e41cfc24c89d246e77"} Jan 28 17:46:47 crc kubenswrapper[5001]: I0128 17:46:47.185219 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-pcn2s" Jan 28 17:46:47 crc kubenswrapper[5001]: I0128 17:46:47.224654 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b747464b-8e55-4056-a282-11d656d849dc-operator-scripts\") pod \"b747464b-8e55-4056-a282-11d656d849dc\" (UID: \"b747464b-8e55-4056-a282-11d656d849dc\") " Jan 28 17:46:47 crc kubenswrapper[5001]: I0128 17:46:47.224811 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgndb\" (UniqueName: \"kubernetes.io/projected/b747464b-8e55-4056-a282-11d656d849dc-kube-api-access-sgndb\") pod \"b747464b-8e55-4056-a282-11d656d849dc\" (UID: \"b747464b-8e55-4056-a282-11d656d849dc\") " Jan 28 17:46:47 crc kubenswrapper[5001]: I0128 17:46:47.225268 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b747464b-8e55-4056-a282-11d656d849dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b747464b-8e55-4056-a282-11d656d849dc" (UID: "b747464b-8e55-4056-a282-11d656d849dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:46:47 crc kubenswrapper[5001]: I0128 17:46:47.232250 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b747464b-8e55-4056-a282-11d656d849dc-kube-api-access-sgndb" (OuterVolumeSpecName: "kube-api-access-sgndb") pod "b747464b-8e55-4056-a282-11d656d849dc" (UID: "b747464b-8e55-4056-a282-11d656d849dc"). InnerVolumeSpecName "kube-api-access-sgndb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:47 crc kubenswrapper[5001]: I0128 17:46:47.325961 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgndb\" (UniqueName: \"kubernetes.io/projected/b747464b-8e55-4056-a282-11d656d849dc-kube-api-access-sgndb\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:47 crc kubenswrapper[5001]: I0128 17:46:47.325990 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b747464b-8e55-4056-a282-11d656d849dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:47 crc kubenswrapper[5001]: I0128 17:46:47.861024 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-pcn2s" event={"ID":"b747464b-8e55-4056-a282-11d656d849dc","Type":"ContainerDied","Data":"1c59af2012a93085d51df59d4593391b73bf5daa277be076ae06752284017c5e"} Jan 28 17:46:47 crc kubenswrapper[5001]: I0128 17:46:47.861063 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c59af2012a93085d51df59d4593391b73bf5daa277be076ae06752284017c5e" Jan 28 17:46:47 crc kubenswrapper[5001]: I0128 17:46:47.861092 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-pcn2s" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.289028 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-cda9-account-create-update-cwncp" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.425247 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-6ca3-account-create-update-q8v42" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.432647 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-5gvgn" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.441273 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqzdx\" (UniqueName: \"kubernetes.io/projected/b9c76350-9eb6-495e-874f-a186c207bea6-kube-api-access-lqzdx\") pod \"b9c76350-9eb6-495e-874f-a186c207bea6\" (UID: \"b9c76350-9eb6-495e-874f-a186c207bea6\") " Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.441331 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9c76350-9eb6-495e-874f-a186c207bea6-operator-scripts\") pod \"b9c76350-9eb6-495e-874f-a186c207bea6\" (UID: \"b9c76350-9eb6-495e-874f-a186c207bea6\") " Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.443007 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9c76350-9eb6-495e-874f-a186c207bea6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b9c76350-9eb6-495e-874f-a186c207bea6" (UID: "b9c76350-9eb6-495e-874f-a186c207bea6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.449082 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-tb26s" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.454424 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9c76350-9eb6-495e-874f-a186c207bea6-kube-api-access-lqzdx" (OuterVolumeSpecName: "kube-api-access-lqzdx") pod "b9c76350-9eb6-495e-874f-a186c207bea6" (UID: "b9c76350-9eb6-495e-874f-a186c207bea6"). InnerVolumeSpecName "kube-api-access-lqzdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.454564 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-84be-account-create-update-rzrth" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.543168 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d09009c8-2bc4-4a46-8243-9f226a5aa244-operator-scripts\") pod \"d09009c8-2bc4-4a46-8243-9f226a5aa244\" (UID: \"d09009c8-2bc4-4a46-8243-9f226a5aa244\") " Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.543205 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/874f309a-0607-4381-99fa-eb25d0e34f02-operator-scripts\") pod \"874f309a-0607-4381-99fa-eb25d0e34f02\" (UID: \"874f309a-0607-4381-99fa-eb25d0e34f02\") " Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.543344 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c5ds\" (UniqueName: \"kubernetes.io/projected/d09009c8-2bc4-4a46-8243-9f226a5aa244-kube-api-access-6c5ds\") pod \"d09009c8-2bc4-4a46-8243-9f226a5aa244\" (UID: \"d09009c8-2bc4-4a46-8243-9f226a5aa244\") " Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.543383 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdftk\" (UniqueName: \"kubernetes.io/projected/874f309a-0607-4381-99fa-eb25d0e34f02-kube-api-access-gdftk\") pod \"874f309a-0607-4381-99fa-eb25d0e34f02\" (UID: \"874f309a-0607-4381-99fa-eb25d0e34f02\") " Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.543618 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/874f309a-0607-4381-99fa-eb25d0e34f02-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "874f309a-0607-4381-99fa-eb25d0e34f02" (UID: "874f309a-0607-4381-99fa-eb25d0e34f02"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.543630 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d09009c8-2bc4-4a46-8243-9f226a5aa244-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d09009c8-2bc4-4a46-8243-9f226a5aa244" (UID: "d09009c8-2bc4-4a46-8243-9f226a5aa244"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.543819 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqzdx\" (UniqueName: \"kubernetes.io/projected/b9c76350-9eb6-495e-874f-a186c207bea6-kube-api-access-lqzdx\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.543841 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9c76350-9eb6-495e-874f-a186c207bea6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.543850 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d09009c8-2bc4-4a46-8243-9f226a5aa244-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.543860 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/874f309a-0607-4381-99fa-eb25d0e34f02-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.546059 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d09009c8-2bc4-4a46-8243-9f226a5aa244-kube-api-access-6c5ds" (OuterVolumeSpecName: "kube-api-access-6c5ds") pod "d09009c8-2bc4-4a46-8243-9f226a5aa244" (UID: "d09009c8-2bc4-4a46-8243-9f226a5aa244"). InnerVolumeSpecName "kube-api-access-6c5ds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.546145 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/874f309a-0607-4381-99fa-eb25d0e34f02-kube-api-access-gdftk" (OuterVolumeSpecName: "kube-api-access-gdftk") pod "874f309a-0607-4381-99fa-eb25d0e34f02" (UID: "874f309a-0607-4381-99fa-eb25d0e34f02"). InnerVolumeSpecName "kube-api-access-gdftk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.644341 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snqfg\" (UniqueName: \"kubernetes.io/projected/849dd9e3-fa3a-413f-909f-356d87e51427-kube-api-access-snqfg\") pod \"849dd9e3-fa3a-413f-909f-356d87e51427\" (UID: \"849dd9e3-fa3a-413f-909f-356d87e51427\") " Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.644691 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849dd9e3-fa3a-413f-909f-356d87e51427-operator-scripts\") pod \"849dd9e3-fa3a-413f-909f-356d87e51427\" (UID: \"849dd9e3-fa3a-413f-909f-356d87e51427\") " Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.644835 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11-operator-scripts\") pod \"7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11\" (UID: \"7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11\") " Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.644986 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9bbr\" (UniqueName: \"kubernetes.io/projected/7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11-kube-api-access-r9bbr\") pod \"7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11\" (UID: \"7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11\") " Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.645244 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11" (UID: "7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.645578 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.645671 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c5ds\" (UniqueName: \"kubernetes.io/projected/d09009c8-2bc4-4a46-8243-9f226a5aa244-kube-api-access-6c5ds\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.645760 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdftk\" (UniqueName: \"kubernetes.io/projected/874f309a-0607-4381-99fa-eb25d0e34f02-kube-api-access-gdftk\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.645581 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/849dd9e3-fa3a-413f-909f-356d87e51427-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "849dd9e3-fa3a-413f-909f-356d87e51427" (UID: "849dd9e3-fa3a-413f-909f-356d87e51427"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.647803 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/849dd9e3-fa3a-413f-909f-356d87e51427-kube-api-access-snqfg" (OuterVolumeSpecName: "kube-api-access-snqfg") pod "849dd9e3-fa3a-413f-909f-356d87e51427" (UID: "849dd9e3-fa3a-413f-909f-356d87e51427"). InnerVolumeSpecName "kube-api-access-snqfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.648161 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11-kube-api-access-r9bbr" (OuterVolumeSpecName: "kube-api-access-r9bbr") pod "7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11" (UID: "7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11"). InnerVolumeSpecName "kube-api-access-r9bbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.747721 5001 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849dd9e3-fa3a-413f-909f-356d87e51427-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.747764 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9bbr\" (UniqueName: \"kubernetes.io/projected/7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11-kube-api-access-r9bbr\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.747778 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snqfg\" (UniqueName: \"kubernetes.io/projected/849dd9e3-fa3a-413f-909f-356d87e51427-kube-api-access-snqfg\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.872377 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-5gvgn" event={"ID":"d09009c8-2bc4-4a46-8243-9f226a5aa244","Type":"ContainerDied","Data":"5e95822b597f6d69ffd1ce919d0919d2c604a49fe704ce89af4f43fbe8b9a259"} Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.872434 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e95822b597f6d69ffd1ce919d0919d2c604a49fe704ce89af4f43fbe8b9a259" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.872399 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-5gvgn" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.873932 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-cda9-account-create-update-cwncp" event={"ID":"b9c76350-9eb6-495e-874f-a186c207bea6","Type":"ContainerDied","Data":"9c979d95f678aec19e7f18b8b06108de1cf64d40457499e41cfc24c89d246e77"} Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.873965 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c979d95f678aec19e7f18b8b06108de1cf64d40457499e41cfc24c89d246e77" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.873968 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-cda9-account-create-update-cwncp" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.875972 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-84be-account-create-update-rzrth" event={"ID":"849dd9e3-fa3a-413f-909f-356d87e51427","Type":"ContainerDied","Data":"7e993217198163c52e46dee8606d72fd213b66d8fdcd2f30ad2afd1fbeedc7c0"} Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.876066 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e993217198163c52e46dee8606d72fd213b66d8fdcd2f30ad2afd1fbeedc7c0" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.875968 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-84be-account-create-update-rzrth" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.892384 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-tb26s" event={"ID":"7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11","Type":"ContainerDied","Data":"5d44bce69d47eeec668673ee6a4257fb4affa06a4b8fbbb011cdb948a17624b3"} Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.892435 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d44bce69d47eeec668673ee6a4257fb4affa06a4b8fbbb011cdb948a17624b3" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.892817 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-tb26s" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.895700 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-6ca3-account-create-update-q8v42" event={"ID":"874f309a-0607-4381-99fa-eb25d0e34f02","Type":"ContainerDied","Data":"bafe1e57a6a43acd13e2d9709fbc979093c3cbdbf66659dfc890bd43567ddfd8"} Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.895748 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bafe1e57a6a43acd13e2d9709fbc979093c3cbdbf66659dfc890bd43567ddfd8" Jan 28 17:46:48 crc kubenswrapper[5001]: I0128 17:46:48.895808 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-6ca3-account-create-update-q8v42" Jan 28 17:46:49 crc kubenswrapper[5001]: E0128 17:46:49.112963 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:46:49 crc kubenswrapper[5001]: E0128 17:46:49.114289 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:46:49 crc kubenswrapper[5001]: E0128 17:46:49.118667 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:46:49 crc kubenswrapper[5001]: E0128 17:46:49.118793 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.117192 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz"] Jan 28 17:46:50 crc kubenswrapper[5001]: E0128 17:46:50.117536 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c76350-9eb6-495e-874f-a186c207bea6" containerName="mariadb-account-create-update" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.117552 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c76350-9eb6-495e-874f-a186c207bea6" containerName="mariadb-account-create-update" Jan 28 17:46:50 crc kubenswrapper[5001]: E0128 17:46:50.117569 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d09009c8-2bc4-4a46-8243-9f226a5aa244" containerName="mariadb-database-create" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.117579 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="d09009c8-2bc4-4a46-8243-9f226a5aa244" containerName="mariadb-database-create" Jan 28 17:46:50 crc kubenswrapper[5001]: E0128 17:46:50.117591 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b747464b-8e55-4056-a282-11d656d849dc" containerName="mariadb-database-create" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.117601 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="b747464b-8e55-4056-a282-11d656d849dc" containerName="mariadb-database-create" Jan 28 17:46:50 crc kubenswrapper[5001]: E0128 17:46:50.117621 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11" containerName="mariadb-database-create" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.117628 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11" containerName="mariadb-database-create" Jan 28 
17:46:50 crc kubenswrapper[5001]: E0128 17:46:50.117644 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874f309a-0607-4381-99fa-eb25d0e34f02" containerName="mariadb-account-create-update" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.117652 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="874f309a-0607-4381-99fa-eb25d0e34f02" containerName="mariadb-account-create-update" Jan 28 17:46:50 crc kubenswrapper[5001]: E0128 17:46:50.117669 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849dd9e3-fa3a-413f-909f-356d87e51427" containerName="mariadb-account-create-update" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.117677 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="849dd9e3-fa3a-413f-909f-356d87e51427" containerName="mariadb-account-create-update" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.117882 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="d09009c8-2bc4-4a46-8243-9f226a5aa244" containerName="mariadb-database-create" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.117900 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c76350-9eb6-495e-874f-a186c207bea6" containerName="mariadb-account-create-update" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.117911 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="874f309a-0607-4381-99fa-eb25d0e34f02" containerName="mariadb-account-create-update" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.117927 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11" containerName="mariadb-database-create" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.117941 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="b747464b-8e55-4056-a282-11d656d849dc" containerName="mariadb-database-create" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.117956 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="849dd9e3-fa3a-413f-909f-356d87e51427" containerName="mariadb-account-create-update" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.118548 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.120708 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-vr6w9" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.120966 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.122813 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.129283 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz"] Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.276049 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f16db31-239d-4a00-8c6d-e50c10fbf407-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-dvstz\" (UID: \"7f16db31-239d-4a00-8c6d-e50c10fbf407\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.276988 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chc7z\" (UniqueName: \"kubernetes.io/projected/7f16db31-239d-4a00-8c6d-e50c10fbf407-kube-api-access-chc7z\") pod \"nova-kuttl-cell0-conductor-db-sync-dvstz\" (UID: \"7f16db31-239d-4a00-8c6d-e50c10fbf407\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.277205 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f16db31-239d-4a00-8c6d-e50c10fbf407-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-dvstz\" (UID: \"7f16db31-239d-4a00-8c6d-e50c10fbf407\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.378389 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chc7z\" (UniqueName: \"kubernetes.io/projected/7f16db31-239d-4a00-8c6d-e50c10fbf407-kube-api-access-chc7z\") pod \"nova-kuttl-cell0-conductor-db-sync-dvstz\" (UID: \"7f16db31-239d-4a00-8c6d-e50c10fbf407\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.378476 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f16db31-239d-4a00-8c6d-e50c10fbf407-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-dvstz\" (UID: \"7f16db31-239d-4a00-8c6d-e50c10fbf407\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.378546 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f16db31-239d-4a00-8c6d-e50c10fbf407-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-dvstz\" (UID: \"7f16db31-239d-4a00-8c6d-e50c10fbf407\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.382788 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/7f16db31-239d-4a00-8c6d-e50c10fbf407-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-dvstz\" (UID: \"7f16db31-239d-4a00-8c6d-e50c10fbf407\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.382971 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f16db31-239d-4a00-8c6d-e50c10fbf407-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-dvstz\" (UID: \"7f16db31-239d-4a00-8c6d-e50c10fbf407\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.396100 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chc7z\" (UniqueName: \"kubernetes.io/projected/7f16db31-239d-4a00-8c6d-e50c10fbf407-kube-api-access-chc7z\") pod \"nova-kuttl-cell0-conductor-db-sync-dvstz\" (UID: \"7f16db31-239d-4a00-8c6d-e50c10fbf407\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.434250 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz" Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.854710 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz"] Jan 28 17:46:50 crc kubenswrapper[5001]: W0128 17:46:50.861927 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f16db31_239d_4a00_8c6d_e50c10fbf407.slice/crio-ff0a14e5b91763d450c8a2798811230e4159ac3d37ab0a84b9ceba483e8b2feb WatchSource:0}: Error finding container ff0a14e5b91763d450c8a2798811230e4159ac3d37ab0a84b9ceba483e8b2feb: Status 404 returned error can't find the container with id ff0a14e5b91763d450c8a2798811230e4159ac3d37ab0a84b9ceba483e8b2feb Jan 28 17:46:50 crc kubenswrapper[5001]: I0128 17:46:50.915337 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz" event={"ID":"7f16db31-239d-4a00-8c6d-e50c10fbf407","Type":"ContainerStarted","Data":"ff0a14e5b91763d450c8a2798811230e4159ac3d37ab0a84b9ceba483e8b2feb"} Jan 28 17:46:51 crc kubenswrapper[5001]: I0128 17:46:51.928278 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz" event={"ID":"7f16db31-239d-4a00-8c6d-e50c10fbf407","Type":"ContainerStarted","Data":"6ee9353ab626a99c354759ba2ac6e9e2dd616c20b53c45f890b36a6819668b42"} Jan 28 17:46:51 crc kubenswrapper[5001]: I0128 17:46:51.943560 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz" podStartSLOduration=1.943546001 podStartE2EDuration="1.943546001s" podCreationTimestamp="2026-01-28 17:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:46:51.942162061 +0000 UTC m=+1858.109950291" watchObservedRunningTime="2026-01-28 17:46:51.943546001 +0000 UTC m=+1858.111334231" Jan 28 17:46:53 crc kubenswrapper[5001]: E0128 17:46:53.526309 5001 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 17:46:53 crc 
kubenswrapper[5001]: E0128 17:46:53.526392 5001 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-config-data podName:7d7efcb6-bbc3-4a64-83ec-66b36aea0fed nodeName:}" failed. No retries permitted until 2026-01-28 17:47:09.526374083 +0000 UTC m=+1875.694162333 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "7d7efcb6-bbc3-4a64-83ec-66b36aea0fed") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 17:46:54 crc kubenswrapper[5001]: E0128 17:46:54.114511 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:46:54 crc kubenswrapper[5001]: E0128 17:46:54.116137 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:46:54 crc kubenswrapper[5001]: E0128 17:46:54.117493 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:46:54 crc kubenswrapper[5001]: E0128 17:46:54.117533 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:46:55 crc kubenswrapper[5001]: I0128 17:46:55.969909 5001 generic.go:334] "Generic (PLEG): container finished" podID="7f16db31-239d-4a00-8c6d-e50c10fbf407" containerID="6ee9353ab626a99c354759ba2ac6e9e2dd616c20b53c45f890b36a6819668b42" exitCode=0 Jan 28 17:46:55 crc kubenswrapper[5001]: I0128 17:46:55.969993 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz" event={"ID":"7f16db31-239d-4a00-8c6d-e50c10fbf407","Type":"ContainerDied","Data":"6ee9353ab626a99c354759ba2ac6e9e2dd616c20b53c45f890b36a6819668b42"} Jan 28 17:46:57 crc kubenswrapper[5001]: I0128 17:46:57.350678 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz" Jan 28 17:46:57 crc kubenswrapper[5001]: I0128 17:46:57.440966 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chc7z\" (UniqueName: \"kubernetes.io/projected/7f16db31-239d-4a00-8c6d-e50c10fbf407-kube-api-access-chc7z\") pod \"7f16db31-239d-4a00-8c6d-e50c10fbf407\" (UID: \"7f16db31-239d-4a00-8c6d-e50c10fbf407\") " Jan 28 17:46:57 crc kubenswrapper[5001]: I0128 17:46:57.441081 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f16db31-239d-4a00-8c6d-e50c10fbf407-config-data\") pod \"7f16db31-239d-4a00-8c6d-e50c10fbf407\" (UID: \"7f16db31-239d-4a00-8c6d-e50c10fbf407\") " Jan 28 17:46:57 crc kubenswrapper[5001]: I0128 17:46:57.441301 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f16db31-239d-4a00-8c6d-e50c10fbf407-scripts\") pod \"7f16db31-239d-4a00-8c6d-e50c10fbf407\" (UID: \"7f16db31-239d-4a00-8c6d-e50c10fbf407\") " Jan 28 17:46:57 crc kubenswrapper[5001]: I0128 17:46:57.446409 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f16db31-239d-4a00-8c6d-e50c10fbf407-scripts" (OuterVolumeSpecName: "scripts") pod "7f16db31-239d-4a00-8c6d-e50c10fbf407" (UID: "7f16db31-239d-4a00-8c6d-e50c10fbf407"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:46:57 crc kubenswrapper[5001]: I0128 17:46:57.446430 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f16db31-239d-4a00-8c6d-e50c10fbf407-kube-api-access-chc7z" (OuterVolumeSpecName: "kube-api-access-chc7z") pod "7f16db31-239d-4a00-8c6d-e50c10fbf407" (UID: "7f16db31-239d-4a00-8c6d-e50c10fbf407"). InnerVolumeSpecName "kube-api-access-chc7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:46:57 crc kubenswrapper[5001]: I0128 17:46:57.463905 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f16db31-239d-4a00-8c6d-e50c10fbf407-config-data" (OuterVolumeSpecName: "config-data") pod "7f16db31-239d-4a00-8c6d-e50c10fbf407" (UID: "7f16db31-239d-4a00-8c6d-e50c10fbf407"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:46:57 crc kubenswrapper[5001]: I0128 17:46:57.542883 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f16db31-239d-4a00-8c6d-e50c10fbf407-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:57 crc kubenswrapper[5001]: I0128 17:46:57.542920 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f16db31-239d-4a00-8c6d-e50c10fbf407-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:57 crc kubenswrapper[5001]: I0128 17:46:57.542932 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chc7z\" (UniqueName: \"kubernetes.io/projected/7f16db31-239d-4a00-8c6d-e50c10fbf407-kube-api-access-chc7z\") on node \"crc\" DevicePath \"\"" Jan 28 17:46:57 crc kubenswrapper[5001]: I0128 17:46:57.988077 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz" event={"ID":"7f16db31-239d-4a00-8c6d-e50c10fbf407","Type":"ContainerDied","Data":"ff0a14e5b91763d450c8a2798811230e4159ac3d37ab0a84b9ceba483e8b2feb"} Jan 28 17:46:57 crc kubenswrapper[5001]: I0128 17:46:57.988129 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff0a14e5b91763d450c8a2798811230e4159ac3d37ab0a84b9ceba483e8b2feb" Jan 28 17:46:57 crc kubenswrapper[5001]: I0128 17:46:57.988126 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz" Jan 28 17:46:58 crc kubenswrapper[5001]: I0128 17:46:58.065499 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:46:58 crc kubenswrapper[5001]: E0128 17:46:58.066771 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f16db31-239d-4a00-8c6d-e50c10fbf407" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 17:46:58 crc kubenswrapper[5001]: I0128 17:46:58.066805 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f16db31-239d-4a00-8c6d-e50c10fbf407" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 17:46:58 crc kubenswrapper[5001]: I0128 17:46:58.067152 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f16db31-239d-4a00-8c6d-e50c10fbf407" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 17:46:58 crc kubenswrapper[5001]: I0128 17:46:58.068353 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:58 crc kubenswrapper[5001]: I0128 17:46:58.070642 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-vr6w9" Jan 28 17:46:58 crc kubenswrapper[5001]: I0128 17:46:58.071386 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 28 17:46:58 crc kubenswrapper[5001]: I0128 17:46:58.076154 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:46:58 crc kubenswrapper[5001]: I0128 17:46:58.254042 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmpjb\" (UniqueName: \"kubernetes.io/projected/99fae513-1f96-42f3-9e69-e55b82c047dc-kube-api-access-jmpjb\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"99fae513-1f96-42f3-9e69-e55b82c047dc\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:58 crc kubenswrapper[5001]: I0128 17:46:58.254122 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99fae513-1f96-42f3-9e69-e55b82c047dc-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"99fae513-1f96-42f3-9e69-e55b82c047dc\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:58 crc kubenswrapper[5001]: I0128 17:46:58.355665 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmpjb\" (UniqueName: \"kubernetes.io/projected/99fae513-1f96-42f3-9e69-e55b82c047dc-kube-api-access-jmpjb\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"99fae513-1f96-42f3-9e69-e55b82c047dc\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:58 crc kubenswrapper[5001]: I0128 17:46:58.355719 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99fae513-1f96-42f3-9e69-e55b82c047dc-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"99fae513-1f96-42f3-9e69-e55b82c047dc\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:58 crc kubenswrapper[5001]: I0128 17:46:58.361904 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99fae513-1f96-42f3-9e69-e55b82c047dc-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"99fae513-1f96-42f3-9e69-e55b82c047dc\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:58 crc kubenswrapper[5001]: I0128 17:46:58.391346 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmpjb\" (UniqueName: \"kubernetes.io/projected/99fae513-1f96-42f3-9e69-e55b82c047dc-kube-api-access-jmpjb\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"99fae513-1f96-42f3-9e69-e55b82c047dc\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:58 crc kubenswrapper[5001]: I0128 17:46:58.409443 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:46:58 crc kubenswrapper[5001]: I0128 17:46:58.896748 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 17:46:59 crc kubenswrapper[5001]: I0128 17:46:58.998884 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"99fae513-1f96-42f3-9e69-e55b82c047dc","Type":"ContainerStarted","Data":"2ea09868c484af1ba6d2c62ed6efe256aab1091fbe97ea7b1817ce66763ca29f"} Jan 28 17:46:59 crc kubenswrapper[5001]: E0128 17:46:59.112521 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:46:59 crc kubenswrapper[5001]: E0128 17:46:59.113915 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:46:59 crc kubenswrapper[5001]: E0128 17:46:59.114899 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:46:59 crc kubenswrapper[5001]: E0128 17:46:59.114936 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:47:00 crc kubenswrapper[5001]: I0128 17:47:00.007495 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"99fae513-1f96-42f3-9e69-e55b82c047dc","Type":"ContainerStarted","Data":"64d5ca61c2b8f05c6347b2dd9efc65340fca9789266522534b5e5e2668b1c45a"} Jan 28 17:47:00 crc kubenswrapper[5001]: I0128 17:47:00.007988 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:47:00 crc kubenswrapper[5001]: I0128 17:47:00.025307 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=2.025287452 podStartE2EDuration="2.025287452s" podCreationTimestamp="2026-01-28 17:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:47:00.020849354 +0000 UTC m=+1866.188637604" watchObservedRunningTime="2026-01-28 17:47:00.025287452 +0000 UTC m=+1866.193075682" Jan 28 17:47:01 crc kubenswrapper[5001]: I0128 17:47:01.594960 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:47:01 crc kubenswrapper[5001]: E0128 17:47:01.595637 5001 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:47:04 crc kubenswrapper[5001]: E0128 17:47:04.112681 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:47:04 crc kubenswrapper[5001]: E0128 17:47:04.114682 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:47:04 crc kubenswrapper[5001]: E0128 17:47:04.116345 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:47:04 crc kubenswrapper[5001]: E0128 17:47:04.116396 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:47:08 crc kubenswrapper[5001]: I0128 17:47:08.441818 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 17:47:08 crc kubenswrapper[5001]: I0128 17:47:08.933149 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78"] Jan 28 17:47:08 crc kubenswrapper[5001]: I0128 17:47:08.934302 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78" Jan 28 17:47:08 crc kubenswrapper[5001]: I0128 17:47:08.939458 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 28 17:47:08 crc kubenswrapper[5001]: I0128 17:47:08.939704 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 28 17:47:08 crc kubenswrapper[5001]: I0128 17:47:08.946868 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78"] Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.030730 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c91ec979-f8eb-45b2-af41-a2040b954d89-config-data\") pod \"nova-kuttl-cell0-cell-mapping-nsw78\" (UID: \"c91ec979-f8eb-45b2-af41-a2040b954d89\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.030784 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76dwr\" (UniqueName: \"kubernetes.io/projected/c91ec979-f8eb-45b2-af41-a2040b954d89-kube-api-access-76dwr\") pod \"nova-kuttl-cell0-cell-mapping-nsw78\" (UID: \"c91ec979-f8eb-45b2-af41-a2040b954d89\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.030834 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c91ec979-f8eb-45b2-af41-a2040b954d89-scripts\") pod \"nova-kuttl-cell0-cell-mapping-nsw78\" (UID: \"c91ec979-f8eb-45b2-af41-a2040b954d89\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.086641 5001 generic.go:334] "Generic (PLEG): container finished" podID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" exitCode=137 Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.086688 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"7d7efcb6-bbc3-4a64-83ec-66b36aea0fed","Type":"ContainerDied","Data":"bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d"} Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.086727 5001 scope.go:117] "RemoveContainer" containerID="190631bee855183db0799a7c00d83ec04e8b2c5ce1213c0a7854d259684d38b3" Jan 28 17:47:09 crc kubenswrapper[5001]: E0128 17:47:09.117244 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d is running failed: container process not found" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:47:09 crc kubenswrapper[5001]: E0128 17:47:09.119784 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d is running failed: container process not found" 
containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:47:09 crc kubenswrapper[5001]: E0128 17:47:09.120309 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d is running failed: container process not found" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 17:47:09 crc kubenswrapper[5001]: E0128 17:47:09.120344 5001 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d is running failed: container process not found" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.132144 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c91ec979-f8eb-45b2-af41-a2040b954d89-config-data\") pod \"nova-kuttl-cell0-cell-mapping-nsw78\" (UID: \"c91ec979-f8eb-45b2-af41-a2040b954d89\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.132204 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76dwr\" (UniqueName: \"kubernetes.io/projected/c91ec979-f8eb-45b2-af41-a2040b954d89-kube-api-access-76dwr\") pod \"nova-kuttl-cell0-cell-mapping-nsw78\" (UID: \"c91ec979-f8eb-45b2-af41-a2040b954d89\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.132245 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c91ec979-f8eb-45b2-af41-a2040b954d89-scripts\") pod \"nova-kuttl-cell0-cell-mapping-nsw78\" (UID: \"c91ec979-f8eb-45b2-af41-a2040b954d89\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.140522 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c91ec979-f8eb-45b2-af41-a2040b954d89-config-data\") pod \"nova-kuttl-cell0-cell-mapping-nsw78\" (UID: \"c91ec979-f8eb-45b2-af41-a2040b954d89\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.150950 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.160947 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c91ec979-f8eb-45b2-af41-a2040b954d89-scripts\") pod \"nova-kuttl-cell0-cell-mapping-nsw78\" (UID: \"c91ec979-f8eb-45b2-af41-a2040b954d89\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.162347 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.164792 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76dwr\" (UniqueName: \"kubernetes.io/projected/c91ec979-f8eb-45b2-af41-a2040b954d89-kube-api-access-76dwr\") pod \"nova-kuttl-cell0-cell-mapping-nsw78\" (UID: \"c91ec979-f8eb-45b2-af41-a2040b954d89\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.168593 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.176026 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.193871 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.196139 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.203921 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.233612 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e761974-8ac8-41c8-8a4f-a94b00fad2a4-config-data\") pod \"nova-kuttl-api-0\" (UID: \"8e761974-8ac8-41c8-8a4f-a94b00fad2a4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.233673 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92xvl\" (UniqueName: \"kubernetes.io/projected/1b94ad6a-ac77-455c-a73f-a9a047f5d714-kube-api-access-92xvl\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"1b94ad6a-ac77-455c-a73f-a9a047f5d714\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.233702 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e761974-8ac8-41c8-8a4f-a94b00fad2a4-logs\") pod \"nova-kuttl-api-0\" (UID: \"8e761974-8ac8-41c8-8a4f-a94b00fad2a4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.233737 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b94ad6a-ac77-455c-a73f-a9a047f5d714-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"1b94ad6a-ac77-455c-a73f-a9a047f5d714\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.233821 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4rj9\" (UniqueName: \"kubernetes.io/projected/8e761974-8ac8-41c8-8a4f-a94b00fad2a4-kube-api-access-p4rj9\") pod \"nova-kuttl-api-0\" (UID: \"8e761974-8ac8-41c8-8a4f-a94b00fad2a4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.242288 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] 
Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.253311 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.266871 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.269855 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:47:09 crc kubenswrapper[5001]: E0128 17:47:09.270803 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.270836 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:47:09 crc kubenswrapper[5001]: E0128 17:47:09.270875 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.270881 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.271286 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.271302 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.271309 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:47:09 crc kubenswrapper[5001]: E0128 17:47:09.271596 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.271605 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.272597 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.275419 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.279537 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.288227 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.299784 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.306782 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.336106 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxfhf\" (UniqueName: \"kubernetes.io/projected/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-kube-api-access-hxfhf\") pod \"7d7efcb6-bbc3-4a64-83ec-66b36aea0fed\" (UID: \"7d7efcb6-bbc3-4a64-83ec-66b36aea0fed\") " Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.336254 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-config-data\") pod \"7d7efcb6-bbc3-4a64-83ec-66b36aea0fed\" (UID: \"7d7efcb6-bbc3-4a64-83ec-66b36aea0fed\") " Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.336520 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff698708-972b-42d1-8a81-b45d26ad98fb-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"ff698708-972b-42d1-8a81-b45d26ad98fb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.336581 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ffe931d-0fc4-4a64-83c3-6d915bc21100-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3ffe931d-0fc4-4a64-83c3-6d915bc21100\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.336613 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4rj9\" (UniqueName: \"kubernetes.io/projected/8e761974-8ac8-41c8-8a4f-a94b00fad2a4-kube-api-access-p4rj9\") pod \"nova-kuttl-api-0\" (UID: \"8e761974-8ac8-41c8-8a4f-a94b00fad2a4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.336687 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jfnt\" (UniqueName: \"kubernetes.io/projected/3ffe931d-0fc4-4a64-83c3-6d915bc21100-kube-api-access-2jfnt\") pod \"nova-kuttl-scheduler-0\" (UID: \"3ffe931d-0fc4-4a64-83c3-6d915bc21100\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.336769 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e761974-8ac8-41c8-8a4f-a94b00fad2a4-config-data\") pod \"nova-kuttl-api-0\" (UID: \"8e761974-8ac8-41c8-8a4f-a94b00fad2a4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.336818 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92xvl\" (UniqueName: \"kubernetes.io/projected/1b94ad6a-ac77-455c-a73f-a9a047f5d714-kube-api-access-92xvl\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"1b94ad6a-ac77-455c-a73f-a9a047f5d714\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 
17:47:09.336837 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff698708-972b-42d1-8a81-b45d26ad98fb-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"ff698708-972b-42d1-8a81-b45d26ad98fb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.336854 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e761974-8ac8-41c8-8a4f-a94b00fad2a4-logs\") pod \"nova-kuttl-api-0\" (UID: \"8e761974-8ac8-41c8-8a4f-a94b00fad2a4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.336897 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw2zl\" (UniqueName: \"kubernetes.io/projected/ff698708-972b-42d1-8a81-b45d26ad98fb-kube-api-access-qw2zl\") pod \"nova-kuttl-metadata-0\" (UID: \"ff698708-972b-42d1-8a81-b45d26ad98fb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.336922 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b94ad6a-ac77-455c-a73f-a9a047f5d714-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"1b94ad6a-ac77-455c-a73f-a9a047f5d714\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.337794 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e761974-8ac8-41c8-8a4f-a94b00fad2a4-logs\") pod \"nova-kuttl-api-0\" (UID: \"8e761974-8ac8-41c8-8a4f-a94b00fad2a4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.340915 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.343407 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b94ad6a-ac77-455c-a73f-a9a047f5d714-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"1b94ad6a-ac77-455c-a73f-a9a047f5d714\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.345352 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e761974-8ac8-41c8-8a4f-a94b00fad2a4-config-data\") pod \"nova-kuttl-api-0\" (UID: \"8e761974-8ac8-41c8-8a4f-a94b00fad2a4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.351762 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-kube-api-access-hxfhf" (OuterVolumeSpecName: "kube-api-access-hxfhf") pod "7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" (UID: "7d7efcb6-bbc3-4a64-83ec-66b36aea0fed"). InnerVolumeSpecName "kube-api-access-hxfhf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.355238 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92xvl\" (UniqueName: \"kubernetes.io/projected/1b94ad6a-ac77-455c-a73f-a9a047f5d714-kube-api-access-92xvl\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"1b94ad6a-ac77-455c-a73f-a9a047f5d714\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.364599 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4rj9\" (UniqueName: \"kubernetes.io/projected/8e761974-8ac8-41c8-8a4f-a94b00fad2a4-kube-api-access-p4rj9\") pod \"nova-kuttl-api-0\" (UID: \"8e761974-8ac8-41c8-8a4f-a94b00fad2a4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.366475 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-config-data" (OuterVolumeSpecName: "config-data") pod "7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" (UID: "7d7efcb6-bbc3-4a64-83ec-66b36aea0fed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.438807 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff698708-972b-42d1-8a81-b45d26ad98fb-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"ff698708-972b-42d1-8a81-b45d26ad98fb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.438848 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ffe931d-0fc4-4a64-83c3-6d915bc21100-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3ffe931d-0fc4-4a64-83c3-6d915bc21100\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.438878 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jfnt\" (UniqueName: \"kubernetes.io/projected/3ffe931d-0fc4-4a64-83c3-6d915bc21100-kube-api-access-2jfnt\") pod \"nova-kuttl-scheduler-0\" (UID: \"3ffe931d-0fc4-4a64-83c3-6d915bc21100\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.438940 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff698708-972b-42d1-8a81-b45d26ad98fb-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"ff698708-972b-42d1-8a81-b45d26ad98fb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.438960 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw2zl\" (UniqueName: \"kubernetes.io/projected/ff698708-972b-42d1-8a81-b45d26ad98fb-kube-api-access-qw2zl\") pod \"nova-kuttl-metadata-0\" (UID: \"ff698708-972b-42d1-8a81-b45d26ad98fb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.439034 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.439046 5001 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-hxfhf\" (UniqueName: \"kubernetes.io/projected/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed-kube-api-access-hxfhf\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.440934 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff698708-972b-42d1-8a81-b45d26ad98fb-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"ff698708-972b-42d1-8a81-b45d26ad98fb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.444353 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ffe931d-0fc4-4a64-83c3-6d915bc21100-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3ffe931d-0fc4-4a64-83c3-6d915bc21100\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.445075 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff698708-972b-42d1-8a81-b45d26ad98fb-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"ff698708-972b-42d1-8a81-b45d26ad98fb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.456930 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw2zl\" (UniqueName: \"kubernetes.io/projected/ff698708-972b-42d1-8a81-b45d26ad98fb-kube-api-access-qw2zl\") pod \"nova-kuttl-metadata-0\" (UID: \"ff698708-972b-42d1-8a81-b45d26ad98fb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.458143 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jfnt\" (UniqueName: \"kubernetes.io/projected/3ffe931d-0fc4-4a64-83c3-6d915bc21100-kube-api-access-2jfnt\") pod \"nova-kuttl-scheduler-0\" (UID: \"3ffe931d-0fc4-4a64-83c3-6d915bc21100\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.553523 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.620219 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.635935 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.642605 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.764448 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78"] Jan 28 17:47:09 crc kubenswrapper[5001]: W0128 17:47:09.769547 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc91ec979_f8eb_45b2_af41_a2040b954d89.slice/crio-438066e5110c55f5f1f310011acde11d36aef856a80c2dafdd8379b5f8e95229 WatchSource:0}: Error finding container 438066e5110c55f5f1f310011acde11d36aef856a80c2dafdd8379b5f8e95229: Status 404 returned error can't find the container with id 438066e5110c55f5f1f310011acde11d36aef856a80c2dafdd8379b5f8e95229 Jan 28 17:47:09 crc kubenswrapper[5001]: I0128 17:47:09.981028 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.002950 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9"] Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.004055 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9" Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.006996 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.007558 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts" Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.020012 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9"] Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.074156 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:47:10 crc kubenswrapper[5001]: W0128 17:47:10.074659 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff698708_972b_42d1_8a81_b45d26ad98fb.slice/crio-4f2440e36338488e6a9b966f630a01b143990eb0d633ffb10a0fbdcf80ff952a WatchSource:0}: Error finding container 4f2440e36338488e6a9b966f630a01b143990eb0d633ffb10a0fbdcf80ff952a: Status 404 returned error can't find the container with id 4f2440e36338488e6a9b966f630a01b143990eb0d633ffb10a0fbdcf80ff952a Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.103854 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"1b94ad6a-ac77-455c-a73f-a9a047f5d714","Type":"ContainerStarted","Data":"c6801e3cfdd4b4b9c95d91148ac2d6afac94009f6a90b53805a2d4a1ccee34ed"} Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.106313 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.106830 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"7d7efcb6-bbc3-4a64-83ec-66b36aea0fed","Type":"ContainerDied","Data":"5c5100ca8c408cdc1ae25b8a44cb3ee76e7441c53e977faacaef53391516cd37"} Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.106913 5001 scope.go:117] "RemoveContainer" containerID="bb7b70cbdb8e4e026eb9331a6cde1d7f676f59dc6ee21a3a1eb036ffe4e7bc6d" Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.107836 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ff698708-972b-42d1-8a81-b45d26ad98fb","Type":"ContainerStarted","Data":"4f2440e36338488e6a9b966f630a01b143990eb0d633ffb10a0fbdcf80ff952a"} Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.109468 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78" event={"ID":"c91ec979-f8eb-45b2-af41-a2040b954d89","Type":"ContainerStarted","Data":"7c7410b8f82320e6c4c83a9f4d1215b7063c2bce5c83f9ec8bf8ad9c7bbcde49"} Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.109660 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78" event={"ID":"c91ec979-f8eb-45b2-af41-a2040b954d89","Type":"ContainerStarted","Data":"438066e5110c55f5f1f310011acde11d36aef856a80c2dafdd8379b5f8e95229"} Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.150468 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt4wq\" (UniqueName: \"kubernetes.io/projected/b6c2af0c-5c66-4f40-b9b4-10b4efec408a-kube-api-access-mt4wq\") pod \"nova-kuttl-cell1-conductor-db-sync-p5mk9\" (UID: \"b6c2af0c-5c66-4f40-b9b4-10b4efec408a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9" Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.150522 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c2af0c-5c66-4f40-b9b4-10b4efec408a-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-p5mk9\" (UID: \"b6c2af0c-5c66-4f40-b9b4-10b4efec408a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9" Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.150564 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c2af0c-5c66-4f40-b9b4-10b4efec408a-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-p5mk9\" (UID: \"b6c2af0c-5c66-4f40-b9b4-10b4efec408a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9" Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.152239 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.165569 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.174462 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:47:10 crc kubenswrapper[5001]: W0128 17:47:10.187377 5001 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e761974_8ac8_41c8_8a4f_a94b00fad2a4.slice/crio-6d3312974edc139b256a45b77bde9764e762876d37c627e27d23f36317fec743 WatchSource:0}: Error finding container 6d3312974edc139b256a45b77bde9764e762876d37c627e27d23f36317fec743: Status 404 returned error can't find the container with id 6d3312974edc139b256a45b77bde9764e762876d37c627e27d23f36317fec743 Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.251449 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.252282 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c2af0c-5c66-4f40-b9b4-10b4efec408a-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-p5mk9\" (UID: \"b6c2af0c-5c66-4f40-b9b4-10b4efec408a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9" Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.252364 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c2af0c-5c66-4f40-b9b4-10b4efec408a-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-p5mk9\" (UID: \"b6c2af0c-5c66-4f40-b9b4-10b4efec408a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9" Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.252476 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt4wq\" (UniqueName: \"kubernetes.io/projected/b6c2af0c-5c66-4f40-b9b4-10b4efec408a-kube-api-access-mt4wq\") pod \"nova-kuttl-cell1-conductor-db-sync-p5mk9\" (UID: \"b6c2af0c-5c66-4f40-b9b4-10b4efec408a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9" Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.256970 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c2af0c-5c66-4f40-b9b4-10b4efec408a-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-p5mk9\" (UID: \"b6c2af0c-5c66-4f40-b9b4-10b4efec408a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9" Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.258594 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c2af0c-5c66-4f40-b9b4-10b4efec408a-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-p5mk9\" (UID: \"b6c2af0c-5c66-4f40-b9b4-10b4efec408a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9" Jan 28 17:47:10 crc kubenswrapper[5001]: W0128 17:47:10.269207 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ffe931d_0fc4_4a64_83c3_6d915bc21100.slice/crio-e11bae48ac0318cc6f1ea0f0dbe6487f181924f3b5c2d58226ccbb86a9de160a WatchSource:0}: Error finding container e11bae48ac0318cc6f1ea0f0dbe6487f181924f3b5c2d58226ccbb86a9de160a: Status 404 returned error can't find the container with id e11bae48ac0318cc6f1ea0f0dbe6487f181924f3b5c2d58226ccbb86a9de160a Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.275528 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt4wq\" (UniqueName: \"kubernetes.io/projected/b6c2af0c-5c66-4f40-b9b4-10b4efec408a-kube-api-access-mt4wq\") pod \"nova-kuttl-cell1-conductor-db-sync-p5mk9\" (UID: \"b6c2af0c-5c66-4f40-b9b4-10b4efec408a\") " 
pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9" Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.428634 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9" Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.604271 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d7efcb6-bbc3-4a64-83ec-66b36aea0fed" path="/var/lib/kubelet/pods/7d7efcb6-bbc3-4a64-83ec-66b36aea0fed/volumes" Jan 28 17:47:10 crc kubenswrapper[5001]: I0128 17:47:10.872853 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9"] Jan 28 17:47:11 crc kubenswrapper[5001]: I0128 17:47:11.127020 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9" event={"ID":"b6c2af0c-5c66-4f40-b9b4-10b4efec408a","Type":"ContainerStarted","Data":"c5945a10d5620d46b22e4d2495db5f14ebf539eb7324c0c3bf22cd0ec8850ef6"} Jan 28 17:47:11 crc kubenswrapper[5001]: I0128 17:47:11.127351 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9" event={"ID":"b6c2af0c-5c66-4f40-b9b4-10b4efec408a","Type":"ContainerStarted","Data":"2e3db7e546e44e2f6b2cf0bdcebaa7fd1d7ce3c54b4dbd7f864cf1f64581378e"} Jan 28 17:47:11 crc kubenswrapper[5001]: I0128 17:47:11.133784 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8e761974-8ac8-41c8-8a4f-a94b00fad2a4","Type":"ContainerStarted","Data":"4383abe711e27d304f2ed028f244535e265d329305a74f2c074a81e89f77062e"} Jan 28 17:47:11 crc kubenswrapper[5001]: I0128 17:47:11.133831 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8e761974-8ac8-41c8-8a4f-a94b00fad2a4","Type":"ContainerStarted","Data":"fd3ae447f7ac41f04288bfefbf1b931f3108221629f205a6287a4ac7aeca14db"} Jan 28 17:47:11 crc kubenswrapper[5001]: I0128 17:47:11.133844 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8e761974-8ac8-41c8-8a4f-a94b00fad2a4","Type":"ContainerStarted","Data":"6d3312974edc139b256a45b77bde9764e762876d37c627e27d23f36317fec743"} Jan 28 17:47:11 crc kubenswrapper[5001]: I0128 17:47:11.138808 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ff698708-972b-42d1-8a81-b45d26ad98fb","Type":"ContainerStarted","Data":"094987f2935bb8edc87b5d3de066ca5166bf4601c1441ac0aa8ec80f5c0e7704"} Jan 28 17:47:11 crc kubenswrapper[5001]: I0128 17:47:11.138881 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ff698708-972b-42d1-8a81-b45d26ad98fb","Type":"ContainerStarted","Data":"f1a27e1bcfa3fe7d56fa5e2fad68d609ffb866f990c3288752f44f21f200d752"} Jan 28 17:47:11 crc kubenswrapper[5001]: I0128 17:47:11.142055 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3ffe931d-0fc4-4a64-83c3-6d915bc21100","Type":"ContainerStarted","Data":"d7a7060f33d1ab8d91a13457b8c4d66a53756cb652bcdd4efc5729b9b0958dd0"} Jan 28 17:47:11 crc kubenswrapper[5001]: I0128 17:47:11.142125 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" 
event={"ID":"3ffe931d-0fc4-4a64-83c3-6d915bc21100","Type":"ContainerStarted","Data":"e11bae48ac0318cc6f1ea0f0dbe6487f181924f3b5c2d58226ccbb86a9de160a"} Jan 28 17:47:11 crc kubenswrapper[5001]: I0128 17:47:11.147063 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"1b94ad6a-ac77-455c-a73f-a9a047f5d714","Type":"ContainerStarted","Data":"d94b504fc012a799e0a3ef7970655c45e7952e342db9f687ea195610234c2658"} Jan 28 17:47:11 crc kubenswrapper[5001]: I0128 17:47:11.149774 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9" podStartSLOduration=2.149760074 podStartE2EDuration="2.149760074s" podCreationTimestamp="2026-01-28 17:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:47:11.146372256 +0000 UTC m=+1877.314160486" watchObservedRunningTime="2026-01-28 17:47:11.149760074 +0000 UTC m=+1877.317548304" Jan 28 17:47:11 crc kubenswrapper[5001]: I0128 17:47:11.165728 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.165713264 podStartE2EDuration="2.165713264s" podCreationTimestamp="2026-01-28 17:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:47:11.163320605 +0000 UTC m=+1877.331108835" watchObservedRunningTime="2026-01-28 17:47:11.165713264 +0000 UTC m=+1877.333501494" Jan 28 17:47:11 crc kubenswrapper[5001]: I0128 17:47:11.191424 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.191405015 podStartE2EDuration="2.191405015s" podCreationTimestamp="2026-01-28 17:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:47:11.182119258 +0000 UTC m=+1877.349907488" watchObservedRunningTime="2026-01-28 17:47:11.191405015 +0000 UTC m=+1877.359193245" Jan 28 17:47:11 crc kubenswrapper[5001]: I0128 17:47:11.201948 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78" podStartSLOduration=3.201933889 podStartE2EDuration="3.201933889s" podCreationTimestamp="2026-01-28 17:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:47:11.198948383 +0000 UTC m=+1877.366736613" watchObservedRunningTime="2026-01-28 17:47:11.201933889 +0000 UTC m=+1877.369722119" Jan 28 17:47:11 crc kubenswrapper[5001]: I0128 17:47:11.225120 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.225102568 podStartE2EDuration="2.225102568s" podCreationTimestamp="2026-01-28 17:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:47:11.215046998 +0000 UTC m=+1877.382835228" watchObservedRunningTime="2026-01-28 17:47:11.225102568 +0000 UTC m=+1877.392890798" Jan 28 17:47:11 crc kubenswrapper[5001]: I0128 17:47:11.245216 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=2.245191927 podStartE2EDuration="2.245191927s" podCreationTimestamp="2026-01-28 17:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:47:11.231744999 +0000 UTC m=+1877.399533239" watchObservedRunningTime="2026-01-28 17:47:11.245191927 +0000 UTC m=+1877.412980157" Jan 28 17:47:14 crc kubenswrapper[5001]: I0128 17:47:14.187031 5001 generic.go:334] "Generic (PLEG): container finished" podID="b6c2af0c-5c66-4f40-b9b4-10b4efec408a" containerID="c5945a10d5620d46b22e4d2495db5f14ebf539eb7324c0c3bf22cd0ec8850ef6" exitCode=0 Jan 28 17:47:14 crc kubenswrapper[5001]: I0128 17:47:14.187356 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9" event={"ID":"b6c2af0c-5c66-4f40-b9b4-10b4efec408a","Type":"ContainerDied","Data":"c5945a10d5620d46b22e4d2495db5f14ebf539eb7324c0c3bf22cd0ec8850ef6"} Jan 28 17:47:14 crc kubenswrapper[5001]: I0128 17:47:14.554609 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:47:14 crc kubenswrapper[5001]: I0128 17:47:14.636538 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:14 crc kubenswrapper[5001]: I0128 17:47:14.636591 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:14 crc kubenswrapper[5001]: I0128 17:47:14.642903 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:15 crc kubenswrapper[5001]: I0128 17:47:15.202835 5001 generic.go:334] "Generic (PLEG): container finished" podID="c91ec979-f8eb-45b2-af41-a2040b954d89" containerID="7c7410b8f82320e6c4c83a9f4d1215b7063c2bce5c83f9ec8bf8ad9c7bbcde49" exitCode=0 Jan 28 17:47:15 crc kubenswrapper[5001]: I0128 17:47:15.202959 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78" event={"ID":"c91ec979-f8eb-45b2-af41-a2040b954d89","Type":"ContainerDied","Data":"7c7410b8f82320e6c4c83a9f4d1215b7063c2bce5c83f9ec8bf8ad9c7bbcde49"} Jan 28 17:47:15 crc kubenswrapper[5001]: I0128 17:47:15.527067 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9" Jan 28 17:47:15 crc kubenswrapper[5001]: I0128 17:47:15.594493 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:47:15 crc kubenswrapper[5001]: I0128 17:47:15.594796 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt4wq\" (UniqueName: \"kubernetes.io/projected/b6c2af0c-5c66-4f40-b9b4-10b4efec408a-kube-api-access-mt4wq\") pod \"b6c2af0c-5c66-4f40-b9b4-10b4efec408a\" (UID: \"b6c2af0c-5c66-4f40-b9b4-10b4efec408a\") " Jan 28 17:47:15 crc kubenswrapper[5001]: I0128 17:47:15.594878 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c2af0c-5c66-4f40-b9b4-10b4efec408a-config-data\") pod \"b6c2af0c-5c66-4f40-b9b4-10b4efec408a\" (UID: \"b6c2af0c-5c66-4f40-b9b4-10b4efec408a\") " Jan 28 17:47:15 crc kubenswrapper[5001]: I0128 17:47:15.594952 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c2af0c-5c66-4f40-b9b4-10b4efec408a-scripts\") pod \"b6c2af0c-5c66-4f40-b9b4-10b4efec408a\" (UID: \"b6c2af0c-5c66-4f40-b9b4-10b4efec408a\") " Jan 28 17:47:15 crc kubenswrapper[5001]: E0128 17:47:15.595202 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:47:15 crc kubenswrapper[5001]: I0128 17:47:15.599810 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c2af0c-5c66-4f40-b9b4-10b4efec408a-kube-api-access-mt4wq" (OuterVolumeSpecName: "kube-api-access-mt4wq") pod "b6c2af0c-5c66-4f40-b9b4-10b4efec408a" (UID: "b6c2af0c-5c66-4f40-b9b4-10b4efec408a"). InnerVolumeSpecName "kube-api-access-mt4wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:47:15 crc kubenswrapper[5001]: I0128 17:47:15.600322 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c2af0c-5c66-4f40-b9b4-10b4efec408a-scripts" (OuterVolumeSpecName: "scripts") pod "b6c2af0c-5c66-4f40-b9b4-10b4efec408a" (UID: "b6c2af0c-5c66-4f40-b9b4-10b4efec408a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:47:15 crc kubenswrapper[5001]: I0128 17:47:15.616465 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c2af0c-5c66-4f40-b9b4-10b4efec408a-config-data" (OuterVolumeSpecName: "config-data") pod "b6c2af0c-5c66-4f40-b9b4-10b4efec408a" (UID: "b6c2af0c-5c66-4f40-b9b4-10b4efec408a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:47:15 crc kubenswrapper[5001]: I0128 17:47:15.698075 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt4wq\" (UniqueName: \"kubernetes.io/projected/b6c2af0c-5c66-4f40-b9b4-10b4efec408a-kube-api-access-mt4wq\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:15 crc kubenswrapper[5001]: I0128 17:47:15.698110 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c2af0c-5c66-4f40-b9b4-10b4efec408a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:15 crc kubenswrapper[5001]: I0128 17:47:15.698128 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c2af0c-5c66-4f40-b9b4-10b4efec408a-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.217335 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9" event={"ID":"b6c2af0c-5c66-4f40-b9b4-10b4efec408a","Type":"ContainerDied","Data":"2e3db7e546e44e2f6b2cf0bdcebaa7fd1d7ce3c54b4dbd7f864cf1f64581378e"} Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.217416 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.217413 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e3db7e546e44e2f6b2cf0bdcebaa7fd1d7ce3c54b4dbd7f864cf1f64581378e" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.301046 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:47:16 crc kubenswrapper[5001]: E0128 17:47:16.301681 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c2af0c-5c66-4f40-b9b4-10b4efec408a" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.301700 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c2af0c-5c66-4f40-b9b4-10b4efec408a" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.301864 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c2af0c-5c66-4f40-b9b4-10b4efec408a" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.302446 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.306132 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.315667 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.413636 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95e53dae-9c1d-442b-b282-a377730ba93a-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"95e53dae-9c1d-442b-b282-a377730ba93a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.413689 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwbvn\" (UniqueName: \"kubernetes.io/projected/95e53dae-9c1d-442b-b282-a377730ba93a-kube-api-access-vwbvn\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"95e53dae-9c1d-442b-b282-a377730ba93a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.515871 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95e53dae-9c1d-442b-b282-a377730ba93a-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"95e53dae-9c1d-442b-b282-a377730ba93a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.515915 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwbvn\" (UniqueName: \"kubernetes.io/projected/95e53dae-9c1d-442b-b282-a377730ba93a-kube-api-access-vwbvn\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"95e53dae-9c1d-442b-b282-a377730ba93a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.520945 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95e53dae-9c1d-442b-b282-a377730ba93a-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"95e53dae-9c1d-442b-b282-a377730ba93a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.531342 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwbvn\" (UniqueName: \"kubernetes.io/projected/95e53dae-9c1d-442b-b282-a377730ba93a-kube-api-access-vwbvn\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"95e53dae-9c1d-442b-b282-a377730ba93a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.568261 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.616730 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76dwr\" (UniqueName: \"kubernetes.io/projected/c91ec979-f8eb-45b2-af41-a2040b954d89-kube-api-access-76dwr\") pod \"c91ec979-f8eb-45b2-af41-a2040b954d89\" (UID: \"c91ec979-f8eb-45b2-af41-a2040b954d89\") " Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.616789 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c91ec979-f8eb-45b2-af41-a2040b954d89-config-data\") pod \"c91ec979-f8eb-45b2-af41-a2040b954d89\" (UID: \"c91ec979-f8eb-45b2-af41-a2040b954d89\") " Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.616824 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c91ec979-f8eb-45b2-af41-a2040b954d89-scripts\") pod \"c91ec979-f8eb-45b2-af41-a2040b954d89\" (UID: \"c91ec979-f8eb-45b2-af41-a2040b954d89\") " Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.620113 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c91ec979-f8eb-45b2-af41-a2040b954d89-scripts" (OuterVolumeSpecName: "scripts") pod "c91ec979-f8eb-45b2-af41-a2040b954d89" (UID: "c91ec979-f8eb-45b2-af41-a2040b954d89"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.621162 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c91ec979-f8eb-45b2-af41-a2040b954d89-kube-api-access-76dwr" (OuterVolumeSpecName: "kube-api-access-76dwr") pod "c91ec979-f8eb-45b2-af41-a2040b954d89" (UID: "c91ec979-f8eb-45b2-af41-a2040b954d89"). InnerVolumeSpecName "kube-api-access-76dwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.624134 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.647068 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c91ec979-f8eb-45b2-af41-a2040b954d89-config-data" (OuterVolumeSpecName: "config-data") pod "c91ec979-f8eb-45b2-af41-a2040b954d89" (UID: "c91ec979-f8eb-45b2-af41-a2040b954d89"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.719090 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76dwr\" (UniqueName: \"kubernetes.io/projected/c91ec979-f8eb-45b2-af41-a2040b954d89-kube-api-access-76dwr\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.719396 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c91ec979-f8eb-45b2-af41-a2040b954d89-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:16 crc kubenswrapper[5001]: I0128 17:47:16.719405 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c91ec979-f8eb-45b2-af41-a2040b954d89-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:17 crc kubenswrapper[5001]: I0128 17:47:17.055011 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 17:47:17 crc kubenswrapper[5001]: I0128 17:47:17.227460 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78" event={"ID":"c91ec979-f8eb-45b2-af41-a2040b954d89","Type":"ContainerDied","Data":"438066e5110c55f5f1f310011acde11d36aef856a80c2dafdd8379b5f8e95229"} Jan 28 17:47:17 crc kubenswrapper[5001]: I0128 17:47:17.227506 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="438066e5110c55f5f1f310011acde11d36aef856a80c2dafdd8379b5f8e95229" Jan 28 17:47:17 crc kubenswrapper[5001]: I0128 17:47:17.227572 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78" Jan 28 17:47:17 crc kubenswrapper[5001]: I0128 17:47:17.231778 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"95e53dae-9c1d-442b-b282-a377730ba93a","Type":"ContainerStarted","Data":"f344c55fece55e3b55747298fa080c538f36d2ffe9a5966664664b7ca4b0e22f"} Jan 28 17:47:17 crc kubenswrapper[5001]: I0128 17:47:17.231933 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:47:17 crc kubenswrapper[5001]: I0128 17:47:17.252263 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=1.252241126 podStartE2EDuration="1.252241126s" podCreationTimestamp="2026-01-28 17:47:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:47:17.246242184 +0000 UTC m=+1883.414030434" watchObservedRunningTime="2026-01-28 17:47:17.252241126 +0000 UTC m=+1883.420029366" Jan 28 17:47:17 crc kubenswrapper[5001]: I0128 17:47:17.565574 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:47:17 crc kubenswrapper[5001]: I0128 17:47:17.566183 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="8e761974-8ac8-41c8-8a4f-a94b00fad2a4" containerName="nova-kuttl-api-log" containerID="cri-o://fd3ae447f7ac41f04288bfefbf1b931f3108221629f205a6287a4ac7aeca14db" gracePeriod=30 Jan 28 17:47:17 crc kubenswrapper[5001]: I0128 17:47:17.566283 5001 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="nova-kuttl-default/nova-kuttl-api-0" podUID="8e761974-8ac8-41c8-8a4f-a94b00fad2a4" containerName="nova-kuttl-api-api" containerID="cri-o://4383abe711e27d304f2ed028f244535e265d329305a74f2c074a81e89f77062e" gracePeriod=30 Jan 28 17:47:17 crc kubenswrapper[5001]: I0128 17:47:17.578848 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:47:17 crc kubenswrapper[5001]: I0128 17:47:17.579200 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="3ffe931d-0fc4-4a64-83c3-6d915bc21100" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://d7a7060f33d1ab8d91a13457b8c4d66a53756cb652bcdd4efc5729b9b0958dd0" gracePeriod=30 Jan 28 17:47:17 crc kubenswrapper[5001]: I0128 17:47:17.720949 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:47:17 crc kubenswrapper[5001]: I0128 17:47:17.721228 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="ff698708-972b-42d1-8a81-b45d26ad98fb" containerName="nova-kuttl-metadata-log" containerID="cri-o://f1a27e1bcfa3fe7d56fa5e2fad68d609ffb866f990c3288752f44f21f200d752" gracePeriod=30 Jan 28 17:47:17 crc kubenswrapper[5001]: I0128 17:47:17.721325 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="ff698708-972b-42d1-8a81-b45d26ad98fb" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://094987f2935bb8edc87b5d3de066ca5166bf4601c1441ac0aa8ec80f5c0e7704" gracePeriod=30 Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.063554 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.146751 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e761974-8ac8-41c8-8a4f-a94b00fad2a4-config-data\") pod \"8e761974-8ac8-41c8-8a4f-a94b00fad2a4\" (UID: \"8e761974-8ac8-41c8-8a4f-a94b00fad2a4\") " Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.147170 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4rj9\" (UniqueName: \"kubernetes.io/projected/8e761974-8ac8-41c8-8a4f-a94b00fad2a4-kube-api-access-p4rj9\") pod \"8e761974-8ac8-41c8-8a4f-a94b00fad2a4\" (UID: \"8e761974-8ac8-41c8-8a4f-a94b00fad2a4\") " Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.147228 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e761974-8ac8-41c8-8a4f-a94b00fad2a4-logs\") pod \"8e761974-8ac8-41c8-8a4f-a94b00fad2a4\" (UID: \"8e761974-8ac8-41c8-8a4f-a94b00fad2a4\") " Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.147736 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e761974-8ac8-41c8-8a4f-a94b00fad2a4-logs" (OuterVolumeSpecName: "logs") pod "8e761974-8ac8-41c8-8a4f-a94b00fad2a4" (UID: "8e761974-8ac8-41c8-8a4f-a94b00fad2a4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.152062 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e761974-8ac8-41c8-8a4f-a94b00fad2a4-kube-api-access-p4rj9" (OuterVolumeSpecName: "kube-api-access-p4rj9") pod "8e761974-8ac8-41c8-8a4f-a94b00fad2a4" (UID: "8e761974-8ac8-41c8-8a4f-a94b00fad2a4"). InnerVolumeSpecName "kube-api-access-p4rj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.172914 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e761974-8ac8-41c8-8a4f-a94b00fad2a4-config-data" (OuterVolumeSpecName: "config-data") pod "8e761974-8ac8-41c8-8a4f-a94b00fad2a4" (UID: "8e761974-8ac8-41c8-8a4f-a94b00fad2a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.185138 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.246960 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"95e53dae-9c1d-442b-b282-a377730ba93a","Type":"ContainerStarted","Data":"a6023292895cb73d539cd858aab3d9b8fdc6629a26dafc5fd8b62189553d74d7"} Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.248228 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw2zl\" (UniqueName: \"kubernetes.io/projected/ff698708-972b-42d1-8a81-b45d26ad98fb-kube-api-access-qw2zl\") pod \"ff698708-972b-42d1-8a81-b45d26ad98fb\" (UID: \"ff698708-972b-42d1-8a81-b45d26ad98fb\") " Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.248636 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff698708-972b-42d1-8a81-b45d26ad98fb-config-data\") pod \"ff698708-972b-42d1-8a81-b45d26ad98fb\" (UID: \"ff698708-972b-42d1-8a81-b45d26ad98fb\") " Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.248750 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff698708-972b-42d1-8a81-b45d26ad98fb-logs\") pod \"ff698708-972b-42d1-8a81-b45d26ad98fb\" (UID: \"ff698708-972b-42d1-8a81-b45d26ad98fb\") " Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.249115 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff698708-972b-42d1-8a81-b45d26ad98fb-logs" (OuterVolumeSpecName: "logs") pod "ff698708-972b-42d1-8a81-b45d26ad98fb" (UID: "ff698708-972b-42d1-8a81-b45d26ad98fb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.249298 5001 generic.go:334] "Generic (PLEG): container finished" podID="ff698708-972b-42d1-8a81-b45d26ad98fb" containerID="094987f2935bb8edc87b5d3de066ca5166bf4601c1441ac0aa8ec80f5c0e7704" exitCode=0 Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.249329 5001 generic.go:334] "Generic (PLEG): container finished" podID="ff698708-972b-42d1-8a81-b45d26ad98fb" containerID="f1a27e1bcfa3fe7d56fa5e2fad68d609ffb866f990c3288752f44f21f200d752" exitCode=143 Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.249379 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ff698708-972b-42d1-8a81-b45d26ad98fb","Type":"ContainerDied","Data":"094987f2935bb8edc87b5d3de066ca5166bf4601c1441ac0aa8ec80f5c0e7704"} Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.249401 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ff698708-972b-42d1-8a81-b45d26ad98fb","Type":"ContainerDied","Data":"f1a27e1bcfa3fe7d56fa5e2fad68d609ffb866f990c3288752f44f21f200d752"} Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.249411 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.249426 5001 scope.go:117] "RemoveContainer" containerID="094987f2935bb8edc87b5d3de066ca5166bf4601c1441ac0aa8ec80f5c0e7704" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.249462 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e761974-8ac8-41c8-8a4f-a94b00fad2a4-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.249483 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4rj9\" (UniqueName: \"kubernetes.io/projected/8e761974-8ac8-41c8-8a4f-a94b00fad2a4-kube-api-access-p4rj9\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.249497 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e761974-8ac8-41c8-8a4f-a94b00fad2a4-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.249508 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff698708-972b-42d1-8a81-b45d26ad98fb-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.249414 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ff698708-972b-42d1-8a81-b45d26ad98fb","Type":"ContainerDied","Data":"4f2440e36338488e6a9b966f630a01b143990eb0d633ffb10a0fbdcf80ff952a"} Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.250758 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff698708-972b-42d1-8a81-b45d26ad98fb-kube-api-access-qw2zl" (OuterVolumeSpecName: "kube-api-access-qw2zl") pod "ff698708-972b-42d1-8a81-b45d26ad98fb" (UID: "ff698708-972b-42d1-8a81-b45d26ad98fb"). InnerVolumeSpecName "kube-api-access-qw2zl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.252884 5001 generic.go:334] "Generic (PLEG): container finished" podID="8e761974-8ac8-41c8-8a4f-a94b00fad2a4" containerID="4383abe711e27d304f2ed028f244535e265d329305a74f2c074a81e89f77062e" exitCode=0 Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.252904 5001 generic.go:334] "Generic (PLEG): container finished" podID="8e761974-8ac8-41c8-8a4f-a94b00fad2a4" containerID="fd3ae447f7ac41f04288bfefbf1b931f3108221629f205a6287a4ac7aeca14db" exitCode=143 Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.252919 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8e761974-8ac8-41c8-8a4f-a94b00fad2a4","Type":"ContainerDied","Data":"4383abe711e27d304f2ed028f244535e265d329305a74f2c074a81e89f77062e"} Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.252935 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8e761974-8ac8-41c8-8a4f-a94b00fad2a4","Type":"ContainerDied","Data":"fd3ae447f7ac41f04288bfefbf1b931f3108221629f205a6287a4ac7aeca14db"} Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.252945 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8e761974-8ac8-41c8-8a4f-a94b00fad2a4","Type":"ContainerDied","Data":"6d3312974edc139b256a45b77bde9764e762876d37c627e27d23f36317fec743"} Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.253023 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.268862 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff698708-972b-42d1-8a81-b45d26ad98fb-config-data" (OuterVolumeSpecName: "config-data") pod "ff698708-972b-42d1-8a81-b45d26ad98fb" (UID: "ff698708-972b-42d1-8a81-b45d26ad98fb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.282968 5001 scope.go:117] "RemoveContainer" containerID="f1a27e1bcfa3fe7d56fa5e2fad68d609ffb866f990c3288752f44f21f200d752" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.324354 5001 scope.go:117] "RemoveContainer" containerID="094987f2935bb8edc87b5d3de066ca5166bf4601c1441ac0aa8ec80f5c0e7704" Jan 28 17:47:18 crc kubenswrapper[5001]: E0128 17:47:18.325528 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"094987f2935bb8edc87b5d3de066ca5166bf4601c1441ac0aa8ec80f5c0e7704\": container with ID starting with 094987f2935bb8edc87b5d3de066ca5166bf4601c1441ac0aa8ec80f5c0e7704 not found: ID does not exist" containerID="094987f2935bb8edc87b5d3de066ca5166bf4601c1441ac0aa8ec80f5c0e7704" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.325644 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"094987f2935bb8edc87b5d3de066ca5166bf4601c1441ac0aa8ec80f5c0e7704"} err="failed to get container status \"094987f2935bb8edc87b5d3de066ca5166bf4601c1441ac0aa8ec80f5c0e7704\": rpc error: code = NotFound desc = could not find container \"094987f2935bb8edc87b5d3de066ca5166bf4601c1441ac0aa8ec80f5c0e7704\": container with ID starting with 094987f2935bb8edc87b5d3de066ca5166bf4601c1441ac0aa8ec80f5c0e7704 not found: ID does not exist" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.325726 5001 scope.go:117] "RemoveContainer" containerID="f1a27e1bcfa3fe7d56fa5e2fad68d609ffb866f990c3288752f44f21f200d752" Jan 28 17:47:18 crc kubenswrapper[5001]: E0128 17:47:18.329114 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1a27e1bcfa3fe7d56fa5e2fad68d609ffb866f990c3288752f44f21f200d752\": container with ID starting with f1a27e1bcfa3fe7d56fa5e2fad68d609ffb866f990c3288752f44f21f200d752 not found: ID does not exist" containerID="f1a27e1bcfa3fe7d56fa5e2fad68d609ffb866f990c3288752f44f21f200d752" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.329163 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1a27e1bcfa3fe7d56fa5e2fad68d609ffb866f990c3288752f44f21f200d752"} err="failed to get container status \"f1a27e1bcfa3fe7d56fa5e2fad68d609ffb866f990c3288752f44f21f200d752\": rpc error: code = NotFound desc = could not find container \"f1a27e1bcfa3fe7d56fa5e2fad68d609ffb866f990c3288752f44f21f200d752\": container with ID starting with f1a27e1bcfa3fe7d56fa5e2fad68d609ffb866f990c3288752f44f21f200d752 not found: ID does not exist" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.329196 5001 scope.go:117] "RemoveContainer" containerID="094987f2935bb8edc87b5d3de066ca5166bf4601c1441ac0aa8ec80f5c0e7704" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.329660 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"094987f2935bb8edc87b5d3de066ca5166bf4601c1441ac0aa8ec80f5c0e7704"} err="failed to get container status \"094987f2935bb8edc87b5d3de066ca5166bf4601c1441ac0aa8ec80f5c0e7704\": rpc error: code = NotFound desc = could not find container \"094987f2935bb8edc87b5d3de066ca5166bf4601c1441ac0aa8ec80f5c0e7704\": container with ID starting with 094987f2935bb8edc87b5d3de066ca5166bf4601c1441ac0aa8ec80f5c0e7704 not found: ID does not exist" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.329709 5001 
scope.go:117] "RemoveContainer" containerID="f1a27e1bcfa3fe7d56fa5e2fad68d609ffb866f990c3288752f44f21f200d752" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.330938 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1a27e1bcfa3fe7d56fa5e2fad68d609ffb866f990c3288752f44f21f200d752"} err="failed to get container status \"f1a27e1bcfa3fe7d56fa5e2fad68d609ffb866f990c3288752f44f21f200d752\": rpc error: code = NotFound desc = could not find container \"f1a27e1bcfa3fe7d56fa5e2fad68d609ffb866f990c3288752f44f21f200d752\": container with ID starting with f1a27e1bcfa3fe7d56fa5e2fad68d609ffb866f990c3288752f44f21f200d752 not found: ID does not exist" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.330965 5001 scope.go:117] "RemoveContainer" containerID="4383abe711e27d304f2ed028f244535e265d329305a74f2c074a81e89f77062e" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.335276 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.344244 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.351376 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw2zl\" (UniqueName: \"kubernetes.io/projected/ff698708-972b-42d1-8a81-b45d26ad98fb-kube-api-access-qw2zl\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.351411 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff698708-972b-42d1-8a81-b45d26ad98fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.355104 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:47:18 crc kubenswrapper[5001]: E0128 17:47:18.356867 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff698708-972b-42d1-8a81-b45d26ad98fb" containerName="nova-kuttl-metadata-log" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.356893 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff698708-972b-42d1-8a81-b45d26ad98fb" containerName="nova-kuttl-metadata-log" Jan 28 17:47:18 crc kubenswrapper[5001]: E0128 17:47:18.356910 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e761974-8ac8-41c8-8a4f-a94b00fad2a4" containerName="nova-kuttl-api-log" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.356922 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e761974-8ac8-41c8-8a4f-a94b00fad2a4" containerName="nova-kuttl-api-log" Jan 28 17:47:18 crc kubenswrapper[5001]: E0128 17:47:18.356934 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff698708-972b-42d1-8a81-b45d26ad98fb" containerName="nova-kuttl-metadata-metadata" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.357124 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff698708-972b-42d1-8a81-b45d26ad98fb" containerName="nova-kuttl-metadata-metadata" Jan 28 17:47:18 crc kubenswrapper[5001]: E0128 17:47:18.357153 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e761974-8ac8-41c8-8a4f-a94b00fad2a4" containerName="nova-kuttl-api-api" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.357165 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e761974-8ac8-41c8-8a4f-a94b00fad2a4" containerName="nova-kuttl-api-api" Jan 28 
17:47:18 crc kubenswrapper[5001]: E0128 17:47:18.357180 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c91ec979-f8eb-45b2-af41-a2040b954d89" containerName="nova-manage" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.357188 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="c91ec979-f8eb-45b2-af41-a2040b954d89" containerName="nova-manage" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.358326 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff698708-972b-42d1-8a81-b45d26ad98fb" containerName="nova-kuttl-metadata-metadata" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.358349 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e761974-8ac8-41c8-8a4f-a94b00fad2a4" containerName="nova-kuttl-api-api" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.358364 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="c91ec979-f8eb-45b2-af41-a2040b954d89" containerName="nova-manage" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.358378 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff698708-972b-42d1-8a81-b45d26ad98fb" containerName="nova-kuttl-metadata-log" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.358392 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e761974-8ac8-41c8-8a4f-a94b00fad2a4" containerName="nova-kuttl-api-log" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.359675 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.364457 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.386112 5001 scope.go:117] "RemoveContainer" containerID="fd3ae447f7ac41f04288bfefbf1b931f3108221629f205a6287a4ac7aeca14db" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.419839 5001 scope.go:117] "RemoveContainer" containerID="4383abe711e27d304f2ed028f244535e265d329305a74f2c074a81e89f77062e" Jan 28 17:47:18 crc kubenswrapper[5001]: E0128 17:47:18.422590 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4383abe711e27d304f2ed028f244535e265d329305a74f2c074a81e89f77062e\": container with ID starting with 4383abe711e27d304f2ed028f244535e265d329305a74f2c074a81e89f77062e not found: ID does not exist" containerID="4383abe711e27d304f2ed028f244535e265d329305a74f2c074a81e89f77062e" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.422627 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4383abe711e27d304f2ed028f244535e265d329305a74f2c074a81e89f77062e"} err="failed to get container status \"4383abe711e27d304f2ed028f244535e265d329305a74f2c074a81e89f77062e\": rpc error: code = NotFound desc = could not find container \"4383abe711e27d304f2ed028f244535e265d329305a74f2c074a81e89f77062e\": container with ID starting with 4383abe711e27d304f2ed028f244535e265d329305a74f2c074a81e89f77062e not found: ID does not exist" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.422655 5001 scope.go:117] "RemoveContainer" containerID="fd3ae447f7ac41f04288bfefbf1b931f3108221629f205a6287a4ac7aeca14db" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.422719 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:47:18 
crc kubenswrapper[5001]: E0128 17:47:18.424446 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd3ae447f7ac41f04288bfefbf1b931f3108221629f205a6287a4ac7aeca14db\": container with ID starting with fd3ae447f7ac41f04288bfefbf1b931f3108221629f205a6287a4ac7aeca14db not found: ID does not exist" containerID="fd3ae447f7ac41f04288bfefbf1b931f3108221629f205a6287a4ac7aeca14db" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.424475 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd3ae447f7ac41f04288bfefbf1b931f3108221629f205a6287a4ac7aeca14db"} err="failed to get container status \"fd3ae447f7ac41f04288bfefbf1b931f3108221629f205a6287a4ac7aeca14db\": rpc error: code = NotFound desc = could not find container \"fd3ae447f7ac41f04288bfefbf1b931f3108221629f205a6287a4ac7aeca14db\": container with ID starting with fd3ae447f7ac41f04288bfefbf1b931f3108221629f205a6287a4ac7aeca14db not found: ID does not exist" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.424495 5001 scope.go:117] "RemoveContainer" containerID="4383abe711e27d304f2ed028f244535e265d329305a74f2c074a81e89f77062e" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.424875 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4383abe711e27d304f2ed028f244535e265d329305a74f2c074a81e89f77062e"} err="failed to get container status \"4383abe711e27d304f2ed028f244535e265d329305a74f2c074a81e89f77062e\": rpc error: code = NotFound desc = could not find container \"4383abe711e27d304f2ed028f244535e265d329305a74f2c074a81e89f77062e\": container with ID starting with 4383abe711e27d304f2ed028f244535e265d329305a74f2c074a81e89f77062e not found: ID does not exist" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.424899 5001 scope.go:117] "RemoveContainer" containerID="fd3ae447f7ac41f04288bfefbf1b931f3108221629f205a6287a4ac7aeca14db" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.425363 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd3ae447f7ac41f04288bfefbf1b931f3108221629f205a6287a4ac7aeca14db"} err="failed to get container status \"fd3ae447f7ac41f04288bfefbf1b931f3108221629f205a6287a4ac7aeca14db\": rpc error: code = NotFound desc = could not find container \"fd3ae447f7ac41f04288bfefbf1b931f3108221629f205a6287a4ac7aeca14db\": container with ID starting with fd3ae447f7ac41f04288bfefbf1b931f3108221629f205a6287a4ac7aeca14db not found: ID does not exist" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.452872 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a0e02a-650f-44c5-972d-502b035cdf12-config-data\") pod \"nova-kuttl-api-0\" (UID: \"63a0e02a-650f-44c5-972d-502b035cdf12\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.452997 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4llj8\" (UniqueName: \"kubernetes.io/projected/63a0e02a-650f-44c5-972d-502b035cdf12-kube-api-access-4llj8\") pod \"nova-kuttl-api-0\" (UID: \"63a0e02a-650f-44c5-972d-502b035cdf12\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.453066 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/63a0e02a-650f-44c5-972d-502b035cdf12-logs\") pod \"nova-kuttl-api-0\" (UID: \"63a0e02a-650f-44c5-972d-502b035cdf12\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.554100 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a0e02a-650f-44c5-972d-502b035cdf12-logs\") pod \"nova-kuttl-api-0\" (UID: \"63a0e02a-650f-44c5-972d-502b035cdf12\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.554171 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a0e02a-650f-44c5-972d-502b035cdf12-config-data\") pod \"nova-kuttl-api-0\" (UID: \"63a0e02a-650f-44c5-972d-502b035cdf12\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.554224 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4llj8\" (UniqueName: \"kubernetes.io/projected/63a0e02a-650f-44c5-972d-502b035cdf12-kube-api-access-4llj8\") pod \"nova-kuttl-api-0\" (UID: \"63a0e02a-650f-44c5-972d-502b035cdf12\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.554862 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a0e02a-650f-44c5-972d-502b035cdf12-logs\") pod \"nova-kuttl-api-0\" (UID: \"63a0e02a-650f-44c5-972d-502b035cdf12\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.559130 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a0e02a-650f-44c5-972d-502b035cdf12-config-data\") pod \"nova-kuttl-api-0\" (UID: \"63a0e02a-650f-44c5-972d-502b035cdf12\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.571375 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4llj8\" (UniqueName: \"kubernetes.io/projected/63a0e02a-650f-44c5-972d-502b035cdf12-kube-api-access-4llj8\") pod \"nova-kuttl-api-0\" (UID: \"63a0e02a-650f-44c5-972d-502b035cdf12\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.605323 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e761974-8ac8-41c8-8a4f-a94b00fad2a4" path="/var/lib/kubelet/pods/8e761974-8ac8-41c8-8a4f-a94b00fad2a4/volumes" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.632533 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.641489 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.659949 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.661556 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.664132 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.676071 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.687845 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.757057 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4bc89c8-e39a-49e0-a1d5-53f54472ad24-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"c4bc89c8-e39a-49e0-a1d5-53f54472ad24\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.757174 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjt5j\" (UniqueName: \"kubernetes.io/projected/c4bc89c8-e39a-49e0-a1d5-53f54472ad24-kube-api-access-rjt5j\") pod \"nova-kuttl-metadata-0\" (UID: \"c4bc89c8-e39a-49e0-a1d5-53f54472ad24\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.757303 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4bc89c8-e39a-49e0-a1d5-53f54472ad24-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"c4bc89c8-e39a-49e0-a1d5-53f54472ad24\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.858617 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4bc89c8-e39a-49e0-a1d5-53f54472ad24-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"c4bc89c8-e39a-49e0-a1d5-53f54472ad24\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.858993 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4bc89c8-e39a-49e0-a1d5-53f54472ad24-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"c4bc89c8-e39a-49e0-a1d5-53f54472ad24\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.859057 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjt5j\" (UniqueName: \"kubernetes.io/projected/c4bc89c8-e39a-49e0-a1d5-53f54472ad24-kube-api-access-rjt5j\") pod \"nova-kuttl-metadata-0\" (UID: \"c4bc89c8-e39a-49e0-a1d5-53f54472ad24\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.859542 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4bc89c8-e39a-49e0-a1d5-53f54472ad24-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"c4bc89c8-e39a-49e0-a1d5-53f54472ad24\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.862920 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4bc89c8-e39a-49e0-a1d5-53f54472ad24-config-data\") pod 
\"nova-kuttl-metadata-0\" (UID: \"c4bc89c8-e39a-49e0-a1d5-53f54472ad24\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.877060 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjt5j\" (UniqueName: \"kubernetes.io/projected/c4bc89c8-e39a-49e0-a1d5-53f54472ad24-kube-api-access-rjt5j\") pod \"nova-kuttl-metadata-0\" (UID: \"c4bc89c8-e39a-49e0-a1d5-53f54472ad24\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:18 crc kubenswrapper[5001]: I0128 17:47:18.975133 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:19 crc kubenswrapper[5001]: I0128 17:47:19.100385 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:47:19 crc kubenswrapper[5001]: W0128 17:47:19.101724 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a0e02a_650f_44c5_972d_502b035cdf12.slice/crio-6bf1b0341c5870b81d4b27fa8733d2e36772a0282910d0b082946f8955259abe WatchSource:0}: Error finding container 6bf1b0341c5870b81d4b27fa8733d2e36772a0282910d0b082946f8955259abe: Status 404 returned error can't find the container with id 6bf1b0341c5870b81d4b27fa8733d2e36772a0282910d0b082946f8955259abe Jan 28 17:47:19 crc kubenswrapper[5001]: I0128 17:47:19.263356 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"63a0e02a-650f-44c5-972d-502b035cdf12","Type":"ContainerStarted","Data":"6bf1b0341c5870b81d4b27fa8733d2e36772a0282910d0b082946f8955259abe"} Jan 28 17:47:19 crc kubenswrapper[5001]: I0128 17:47:19.434937 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:47:19 crc kubenswrapper[5001]: W0128 17:47:19.442266 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4bc89c8_e39a_49e0_a1d5_53f54472ad24.slice/crio-e3d268250b8240613a53061b696bd6de3c6d34700c4fc49b3f3b79a4fe493d14 WatchSource:0}: Error finding container e3d268250b8240613a53061b696bd6de3c6d34700c4fc49b3f3b79a4fe493d14: Status 404 returned error can't find the container with id e3d268250b8240613a53061b696bd6de3c6d34700c4fc49b3f3b79a4fe493d14 Jan 28 17:47:19 crc kubenswrapper[5001]: I0128 17:47:19.554019 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:47:19 crc kubenswrapper[5001]: I0128 17:47:19.568244 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:47:20 crc kubenswrapper[5001]: I0128 17:47:20.275444 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"63a0e02a-650f-44c5-972d-502b035cdf12","Type":"ContainerStarted","Data":"4ca4e1adf3242d93c6909ec5bfd4aac60f63b0a8306101d1cc9f6e17bf8c1592"} Jan 28 17:47:20 crc kubenswrapper[5001]: I0128 17:47:20.275492 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"63a0e02a-650f-44c5-972d-502b035cdf12","Type":"ContainerStarted","Data":"761c1584fae705e7cf3c503cd3ea82993d1f6da2d27ab10937e09ff9af572818"} Jan 28 17:47:20 crc kubenswrapper[5001]: I0128 17:47:20.276900 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"c4bc89c8-e39a-49e0-a1d5-53f54472ad24","Type":"ContainerStarted","Data":"6a74955ec06ebc91c7c4cacc437f97c90003217cfb401cd52831e007d666a192"} Jan 28 17:47:20 crc kubenswrapper[5001]: I0128 17:47:20.276946 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"c4bc89c8-e39a-49e0-a1d5-53f54472ad24","Type":"ContainerStarted","Data":"0efba886227010f3c28ba75eb2b0180ff32cd0fccbe043e13782f80bf2b47e5a"} Jan 28 17:47:20 crc kubenswrapper[5001]: I0128 17:47:20.276956 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"c4bc89c8-e39a-49e0-a1d5-53f54472ad24","Type":"ContainerStarted","Data":"e3d268250b8240613a53061b696bd6de3c6d34700c4fc49b3f3b79a4fe493d14"} Jan 28 17:47:20 crc kubenswrapper[5001]: I0128 17:47:20.286241 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 17:47:20 crc kubenswrapper[5001]: I0128 17:47:20.295541 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.295525793 podStartE2EDuration="2.295525793s" podCreationTimestamp="2026-01-28 17:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:47:20.294167264 +0000 UTC m=+1886.461955604" watchObservedRunningTime="2026-01-28 17:47:20.295525793 +0000 UTC m=+1886.463314023" Jan 28 17:47:20 crc kubenswrapper[5001]: I0128 17:47:20.313727 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.313706158 podStartE2EDuration="2.313706158s" podCreationTimestamp="2026-01-28 17:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:47:20.310963659 +0000 UTC m=+1886.478751899" watchObservedRunningTime="2026-01-28 17:47:20.313706158 +0000 UTC m=+1886.481494388" Jan 28 17:47:20 crc kubenswrapper[5001]: I0128 17:47:20.624507 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff698708-972b-42d1-8a81-b45d26ad98fb" path="/var/lib/kubelet/pods/ff698708-972b-42d1-8a81-b45d26ad98fb/volumes" Jan 28 17:47:21 crc kubenswrapper[5001]: I0128 17:47:21.920872 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.051847 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jfnt\" (UniqueName: \"kubernetes.io/projected/3ffe931d-0fc4-4a64-83c3-6d915bc21100-kube-api-access-2jfnt\") pod \"3ffe931d-0fc4-4a64-83c3-6d915bc21100\" (UID: \"3ffe931d-0fc4-4a64-83c3-6d915bc21100\") " Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.052074 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ffe931d-0fc4-4a64-83c3-6d915bc21100-config-data\") pod \"3ffe931d-0fc4-4a64-83c3-6d915bc21100\" (UID: \"3ffe931d-0fc4-4a64-83c3-6d915bc21100\") " Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.058669 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ffe931d-0fc4-4a64-83c3-6d915bc21100-kube-api-access-2jfnt" (OuterVolumeSpecName: "kube-api-access-2jfnt") pod "3ffe931d-0fc4-4a64-83c3-6d915bc21100" (UID: "3ffe931d-0fc4-4a64-83c3-6d915bc21100"). InnerVolumeSpecName "kube-api-access-2jfnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.075362 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ffe931d-0fc4-4a64-83c3-6d915bc21100-config-data" (OuterVolumeSpecName: "config-data") pod "3ffe931d-0fc4-4a64-83c3-6d915bc21100" (UID: "3ffe931d-0fc4-4a64-83c3-6d915bc21100"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.153944 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jfnt\" (UniqueName: \"kubernetes.io/projected/3ffe931d-0fc4-4a64-83c3-6d915bc21100-kube-api-access-2jfnt\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.154017 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ffe931d-0fc4-4a64-83c3-6d915bc21100-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.425556 5001 generic.go:334] "Generic (PLEG): container finished" podID="3ffe931d-0fc4-4a64-83c3-6d915bc21100" containerID="d7a7060f33d1ab8d91a13457b8c4d66a53756cb652bcdd4efc5729b9b0958dd0" exitCode=0 Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.425629 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3ffe931d-0fc4-4a64-83c3-6d915bc21100","Type":"ContainerDied","Data":"d7a7060f33d1ab8d91a13457b8c4d66a53756cb652bcdd4efc5729b9b0958dd0"} Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.425646 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.425680 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3ffe931d-0fc4-4a64-83c3-6d915bc21100","Type":"ContainerDied","Data":"e11bae48ac0318cc6f1ea0f0dbe6487f181924f3b5c2d58226ccbb86a9de160a"} Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.425713 5001 scope.go:117] "RemoveContainer" containerID="d7a7060f33d1ab8d91a13457b8c4d66a53756cb652bcdd4efc5729b9b0958dd0" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.453627 5001 scope.go:117] "RemoveContainer" containerID="d7a7060f33d1ab8d91a13457b8c4d66a53756cb652bcdd4efc5729b9b0958dd0" Jan 28 17:47:22 crc kubenswrapper[5001]: E0128 17:47:22.454458 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7a7060f33d1ab8d91a13457b8c4d66a53756cb652bcdd4efc5729b9b0958dd0\": container with ID starting with d7a7060f33d1ab8d91a13457b8c4d66a53756cb652bcdd4efc5729b9b0958dd0 not found: ID does not exist" containerID="d7a7060f33d1ab8d91a13457b8c4d66a53756cb652bcdd4efc5729b9b0958dd0" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.454524 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7a7060f33d1ab8d91a13457b8c4d66a53756cb652bcdd4efc5729b9b0958dd0"} err="failed to get container status \"d7a7060f33d1ab8d91a13457b8c4d66a53756cb652bcdd4efc5729b9b0958dd0\": rpc error: code = NotFound desc = could not find container \"d7a7060f33d1ab8d91a13457b8c4d66a53756cb652bcdd4efc5729b9b0958dd0\": container with ID starting with d7a7060f33d1ab8d91a13457b8c4d66a53756cb652bcdd4efc5729b9b0958dd0 not found: ID does not exist" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.493182 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.504142 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.511105 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:47:22 crc kubenswrapper[5001]: E0128 17:47:22.511558 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ffe931d-0fc4-4a64-83c3-6d915bc21100" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.511575 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ffe931d-0fc4-4a64-83c3-6d915bc21100" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.511790 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ffe931d-0fc4-4a64-83c3-6d915bc21100" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.512534 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.518237 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.518909 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.607949 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ffe931d-0fc4-4a64-83c3-6d915bc21100" path="/var/lib/kubelet/pods/3ffe931d-0fc4-4a64-83c3-6d915bc21100/volumes" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.662899 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bb6527-e6fa-4eb7-96fe-53dfde2e7515-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"f2bb6527-e6fa-4eb7-96fe-53dfde2e7515\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.663003 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4tzb\" (UniqueName: \"kubernetes.io/projected/f2bb6527-e6fa-4eb7-96fe-53dfde2e7515-kube-api-access-r4tzb\") pod \"nova-kuttl-scheduler-0\" (UID: \"f2bb6527-e6fa-4eb7-96fe-53dfde2e7515\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.764353 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4tzb\" (UniqueName: \"kubernetes.io/projected/f2bb6527-e6fa-4eb7-96fe-53dfde2e7515-kube-api-access-r4tzb\") pod \"nova-kuttl-scheduler-0\" (UID: \"f2bb6527-e6fa-4eb7-96fe-53dfde2e7515\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.764478 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bb6527-e6fa-4eb7-96fe-53dfde2e7515-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"f2bb6527-e6fa-4eb7-96fe-53dfde2e7515\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.770346 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bb6527-e6fa-4eb7-96fe-53dfde2e7515-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"f2bb6527-e6fa-4eb7-96fe-53dfde2e7515\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.787952 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4tzb\" (UniqueName: \"kubernetes.io/projected/f2bb6527-e6fa-4eb7-96fe-53dfde2e7515-kube-api-access-r4tzb\") pod \"nova-kuttl-scheduler-0\" (UID: \"f2bb6527-e6fa-4eb7-96fe-53dfde2e7515\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:22 crc kubenswrapper[5001]: I0128 17:47:22.845382 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:23 crc kubenswrapper[5001]: I0128 17:47:23.273626 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:47:23 crc kubenswrapper[5001]: I0128 17:47:23.439096 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"f2bb6527-e6fa-4eb7-96fe-53dfde2e7515","Type":"ContainerStarted","Data":"fad02df2aaa00233e0756a201f369267b6c20faef867d3feb060a3ce11ba2a48"} Jan 28 17:47:23 crc kubenswrapper[5001]: I0128 17:47:23.975401 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:23 crc kubenswrapper[5001]: I0128 17:47:23.975490 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:24 crc kubenswrapper[5001]: I0128 17:47:24.459462 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"f2bb6527-e6fa-4eb7-96fe-53dfde2e7515","Type":"ContainerStarted","Data":"4c03119ad5038c6e18957537193643f6989a6cb9c25f8c8f74eb3df0ba55b143"} Jan 28 17:47:24 crc kubenswrapper[5001]: I0128 17:47:24.477582 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.477557991 podStartE2EDuration="2.477557991s" podCreationTimestamp="2026-01-28 17:47:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:47:24.472862995 +0000 UTC m=+1890.640651235" watchObservedRunningTime="2026-01-28 17:47:24.477557991 +0000 UTC m=+1890.645346241" Jan 28 17:47:26 crc kubenswrapper[5001]: I0128 17:47:26.593900 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:47:26 crc kubenswrapper[5001]: E0128 17:47:26.594480 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:47:26 crc kubenswrapper[5001]: I0128 17:47:26.654036 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 17:47:27 crc kubenswrapper[5001]: I0128 17:47:27.101486 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm"] Jan 28 17:47:27 crc kubenswrapper[5001]: I0128 17:47:27.102468 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm" Jan 28 17:47:27 crc kubenswrapper[5001]: I0128 17:47:27.104686 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data" Jan 28 17:47:27 crc kubenswrapper[5001]: I0128 17:47:27.107349 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts" Jan 28 17:47:27 crc kubenswrapper[5001]: I0128 17:47:27.118485 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm"] Jan 28 17:47:27 crc kubenswrapper[5001]: I0128 17:47:27.142698 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d576b9-5be5-4988-a627-4d6b96e55a64-config-data\") pod \"nova-kuttl-cell1-cell-mapping-7zhhm\" (UID: \"19d576b9-5be5-4988-a627-4d6b96e55a64\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm" Jan 28 17:47:27 crc kubenswrapper[5001]: I0128 17:47:27.142828 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxpm9\" (UniqueName: \"kubernetes.io/projected/19d576b9-5be5-4988-a627-4d6b96e55a64-kube-api-access-kxpm9\") pod \"nova-kuttl-cell1-cell-mapping-7zhhm\" (UID: \"19d576b9-5be5-4988-a627-4d6b96e55a64\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm" Jan 28 17:47:27 crc kubenswrapper[5001]: I0128 17:47:27.143038 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19d576b9-5be5-4988-a627-4d6b96e55a64-scripts\") pod \"nova-kuttl-cell1-cell-mapping-7zhhm\" (UID: \"19d576b9-5be5-4988-a627-4d6b96e55a64\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm" Jan 28 17:47:27 crc kubenswrapper[5001]: I0128 17:47:27.245237 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19d576b9-5be5-4988-a627-4d6b96e55a64-scripts\") pod \"nova-kuttl-cell1-cell-mapping-7zhhm\" (UID: \"19d576b9-5be5-4988-a627-4d6b96e55a64\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm" Jan 28 17:47:27 crc kubenswrapper[5001]: I0128 17:47:27.245339 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d576b9-5be5-4988-a627-4d6b96e55a64-config-data\") pod \"nova-kuttl-cell1-cell-mapping-7zhhm\" (UID: \"19d576b9-5be5-4988-a627-4d6b96e55a64\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm" Jan 28 17:47:27 crc kubenswrapper[5001]: I0128 17:47:27.245407 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxpm9\" (UniqueName: \"kubernetes.io/projected/19d576b9-5be5-4988-a627-4d6b96e55a64-kube-api-access-kxpm9\") pod \"nova-kuttl-cell1-cell-mapping-7zhhm\" (UID: \"19d576b9-5be5-4988-a627-4d6b96e55a64\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm" Jan 28 17:47:27 crc kubenswrapper[5001]: I0128 17:47:27.251992 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19d576b9-5be5-4988-a627-4d6b96e55a64-scripts\") pod \"nova-kuttl-cell1-cell-mapping-7zhhm\" (UID: \"19d576b9-5be5-4988-a627-4d6b96e55a64\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm" Jan 28 17:47:27 crc 
kubenswrapper[5001]: I0128 17:47:27.252938 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d576b9-5be5-4988-a627-4d6b96e55a64-config-data\") pod \"nova-kuttl-cell1-cell-mapping-7zhhm\" (UID: \"19d576b9-5be5-4988-a627-4d6b96e55a64\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm" Jan 28 17:47:27 crc kubenswrapper[5001]: I0128 17:47:27.262546 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxpm9\" (UniqueName: \"kubernetes.io/projected/19d576b9-5be5-4988-a627-4d6b96e55a64-kube-api-access-kxpm9\") pod \"nova-kuttl-cell1-cell-mapping-7zhhm\" (UID: \"19d576b9-5be5-4988-a627-4d6b96e55a64\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm" Jan 28 17:47:27 crc kubenswrapper[5001]: I0128 17:47:27.422834 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm" Jan 28 17:47:27 crc kubenswrapper[5001]: I0128 17:47:27.845603 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:27 crc kubenswrapper[5001]: I0128 17:47:27.848645 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm"] Jan 28 17:47:28 crc kubenswrapper[5001]: I0128 17:47:28.495552 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm" event={"ID":"19d576b9-5be5-4988-a627-4d6b96e55a64","Type":"ContainerStarted","Data":"ba6f772a095a7aec34e99d19b02d196d35bbbf80633f19a39e35f4e504c1df9e"} Jan 28 17:47:28 crc kubenswrapper[5001]: I0128 17:47:28.495609 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm" event={"ID":"19d576b9-5be5-4988-a627-4d6b96e55a64","Type":"ContainerStarted","Data":"8e06b8402039740a3870f750349b397d2663393e2d840328691579ef96906cba"} Jan 28 17:47:28 crc kubenswrapper[5001]: I0128 17:47:28.514251 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm" podStartSLOduration=1.514236336 podStartE2EDuration="1.514236336s" podCreationTimestamp="2026-01-28 17:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:47:28.513294269 +0000 UTC m=+1894.681082499" watchObservedRunningTime="2026-01-28 17:47:28.514236336 +0000 UTC m=+1894.682024566" Jan 28 17:47:28 crc kubenswrapper[5001]: I0128 17:47:28.688209 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:28 crc kubenswrapper[5001]: I0128 17:47:28.688272 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:28 crc kubenswrapper[5001]: I0128 17:47:28.976129 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:28 crc kubenswrapper[5001]: I0128 17:47:28.976395 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:29 crc kubenswrapper[5001]: I0128 17:47:29.771280 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="63a0e02a-650f-44c5-972d-502b035cdf12" 
containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.227:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:47:29 crc kubenswrapper[5001]: I0128 17:47:29.771264 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="63a0e02a-650f-44c5-972d-502b035cdf12" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.227:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:47:30 crc kubenswrapper[5001]: I0128 17:47:30.059384 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="c4bc89c8-e39a-49e0-a1d5-53f54472ad24" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.228:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:47:30 crc kubenswrapper[5001]: I0128 17:47:30.059917 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="c4bc89c8-e39a-49e0-a1d5-53f54472ad24" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.228:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:47:32 crc kubenswrapper[5001]: I0128 17:47:32.846111 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:32 crc kubenswrapper[5001]: I0128 17:47:32.869646 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:33 crc kubenswrapper[5001]: I0128 17:47:33.544929 5001 generic.go:334] "Generic (PLEG): container finished" podID="19d576b9-5be5-4988-a627-4d6b96e55a64" containerID="ba6f772a095a7aec34e99d19b02d196d35bbbf80633f19a39e35f4e504c1df9e" exitCode=0 Jan 28 17:47:33 crc kubenswrapper[5001]: I0128 17:47:33.545056 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm" event={"ID":"19d576b9-5be5-4988-a627-4d6b96e55a64","Type":"ContainerDied","Data":"ba6f772a095a7aec34e99d19b02d196d35bbbf80633f19a39e35f4e504c1df9e"} Jan 28 17:47:33 crc kubenswrapper[5001]: I0128 17:47:33.577928 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:34 crc kubenswrapper[5001]: I0128 17:47:34.886576 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm" Jan 28 17:47:34 crc kubenswrapper[5001]: I0128 17:47:34.985868 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxpm9\" (UniqueName: \"kubernetes.io/projected/19d576b9-5be5-4988-a627-4d6b96e55a64-kube-api-access-kxpm9\") pod \"19d576b9-5be5-4988-a627-4d6b96e55a64\" (UID: \"19d576b9-5be5-4988-a627-4d6b96e55a64\") " Jan 28 17:47:34 crc kubenswrapper[5001]: I0128 17:47:34.985964 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19d576b9-5be5-4988-a627-4d6b96e55a64-scripts\") pod \"19d576b9-5be5-4988-a627-4d6b96e55a64\" (UID: \"19d576b9-5be5-4988-a627-4d6b96e55a64\") " Jan 28 17:47:34 crc kubenswrapper[5001]: I0128 17:47:34.986048 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d576b9-5be5-4988-a627-4d6b96e55a64-config-data\") pod \"19d576b9-5be5-4988-a627-4d6b96e55a64\" (UID: \"19d576b9-5be5-4988-a627-4d6b96e55a64\") " Jan 28 17:47:34 crc kubenswrapper[5001]: I0128 17:47:34.992736 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19d576b9-5be5-4988-a627-4d6b96e55a64-scripts" (OuterVolumeSpecName: "scripts") pod "19d576b9-5be5-4988-a627-4d6b96e55a64" (UID: "19d576b9-5be5-4988-a627-4d6b96e55a64"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:47:35 crc kubenswrapper[5001]: I0128 17:47:35.006526 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19d576b9-5be5-4988-a627-4d6b96e55a64-kube-api-access-kxpm9" (OuterVolumeSpecName: "kube-api-access-kxpm9") pod "19d576b9-5be5-4988-a627-4d6b96e55a64" (UID: "19d576b9-5be5-4988-a627-4d6b96e55a64"). InnerVolumeSpecName "kube-api-access-kxpm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:47:35 crc kubenswrapper[5001]: I0128 17:47:35.014152 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19d576b9-5be5-4988-a627-4d6b96e55a64-config-data" (OuterVolumeSpecName: "config-data") pod "19d576b9-5be5-4988-a627-4d6b96e55a64" (UID: "19d576b9-5be5-4988-a627-4d6b96e55a64"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:47:35 crc kubenswrapper[5001]: I0128 17:47:35.087362 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxpm9\" (UniqueName: \"kubernetes.io/projected/19d576b9-5be5-4988-a627-4d6b96e55a64-kube-api-access-kxpm9\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:35 crc kubenswrapper[5001]: I0128 17:47:35.087400 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19d576b9-5be5-4988-a627-4d6b96e55a64-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:35 crc kubenswrapper[5001]: I0128 17:47:35.087414 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d576b9-5be5-4988-a627-4d6b96e55a64-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:35 crc kubenswrapper[5001]: I0128 17:47:35.564890 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm" event={"ID":"19d576b9-5be5-4988-a627-4d6b96e55a64","Type":"ContainerDied","Data":"8e06b8402039740a3870f750349b397d2663393e2d840328691579ef96906cba"} Jan 28 17:47:35 crc kubenswrapper[5001]: I0128 17:47:35.564926 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e06b8402039740a3870f750349b397d2663393e2d840328691579ef96906cba" Jan 28 17:47:35 crc kubenswrapper[5001]: I0128 17:47:35.564995 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm" Jan 28 17:47:35 crc kubenswrapper[5001]: I0128 17:47:35.747962 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:47:35 crc kubenswrapper[5001]: I0128 17:47:35.748217 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="63a0e02a-650f-44c5-972d-502b035cdf12" containerName="nova-kuttl-api-log" containerID="cri-o://4ca4e1adf3242d93c6909ec5bfd4aac60f63b0a8306101d1cc9f6e17bf8c1592" gracePeriod=30 Jan 28 17:47:35 crc kubenswrapper[5001]: I0128 17:47:35.748644 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="63a0e02a-650f-44c5-972d-502b035cdf12" containerName="nova-kuttl-api-api" containerID="cri-o://761c1584fae705e7cf3c503cd3ea82993d1f6da2d27ab10937e09ff9af572818" gracePeriod=30 Jan 28 17:47:35 crc kubenswrapper[5001]: I0128 17:47:35.771218 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:47:35 crc kubenswrapper[5001]: I0128 17:47:35.771771 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="f2bb6527-e6fa-4eb7-96fe-53dfde2e7515" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://4c03119ad5038c6e18957537193643f6989a6cb9c25f8c8f74eb3df0ba55b143" gracePeriod=30 Jan 28 17:47:35 crc kubenswrapper[5001]: I0128 17:47:35.908799 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:47:35 crc kubenswrapper[5001]: I0128 17:47:35.909090 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="c4bc89c8-e39a-49e0-a1d5-53f54472ad24" containerName="nova-kuttl-metadata-log" 
containerID="cri-o://0efba886227010f3c28ba75eb2b0180ff32cd0fccbe043e13782f80bf2b47e5a" gracePeriod=30 Jan 28 17:47:35 crc kubenswrapper[5001]: I0128 17:47:35.909201 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="c4bc89c8-e39a-49e0-a1d5-53f54472ad24" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://6a74955ec06ebc91c7c4cacc437f97c90003217cfb401cd52831e007d666a192" gracePeriod=30 Jan 28 17:47:36 crc kubenswrapper[5001]: I0128 17:47:36.574346 5001 generic.go:334] "Generic (PLEG): container finished" podID="63a0e02a-650f-44c5-972d-502b035cdf12" containerID="4ca4e1adf3242d93c6909ec5bfd4aac60f63b0a8306101d1cc9f6e17bf8c1592" exitCode=143 Jan 28 17:47:36 crc kubenswrapper[5001]: I0128 17:47:36.574416 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"63a0e02a-650f-44c5-972d-502b035cdf12","Type":"ContainerDied","Data":"4ca4e1adf3242d93c6909ec5bfd4aac60f63b0a8306101d1cc9f6e17bf8c1592"} Jan 28 17:47:36 crc kubenswrapper[5001]: I0128 17:47:36.576822 5001 generic.go:334] "Generic (PLEG): container finished" podID="c4bc89c8-e39a-49e0-a1d5-53f54472ad24" containerID="0efba886227010f3c28ba75eb2b0180ff32cd0fccbe043e13782f80bf2b47e5a" exitCode=143 Jan 28 17:47:36 crc kubenswrapper[5001]: I0128 17:47:36.576871 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"c4bc89c8-e39a-49e0-a1d5-53f54472ad24","Type":"ContainerDied","Data":"0efba886227010f3c28ba75eb2b0180ff32cd0fccbe043e13782f80bf2b47e5a"} Jan 28 17:47:37 crc kubenswrapper[5001]: E0128 17:47:37.847880 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4c03119ad5038c6e18957537193643f6989a6cb9c25f8c8f74eb3df0ba55b143" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:47:37 crc kubenswrapper[5001]: E0128 17:47:37.849348 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4c03119ad5038c6e18957537193643f6989a6cb9c25f8c8f74eb3df0ba55b143" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:47:37 crc kubenswrapper[5001]: E0128 17:47:37.850935 5001 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4c03119ad5038c6e18957537193643f6989a6cb9c25f8c8f74eb3df0ba55b143" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 17:47:37 crc kubenswrapper[5001]: E0128 17:47:37.851016 5001 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="f2bb6527-e6fa-4eb7-96fe-53dfde2e7515" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.309098 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.460233 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a0e02a-650f-44c5-972d-502b035cdf12-config-data\") pod \"63a0e02a-650f-44c5-972d-502b035cdf12\" (UID: \"63a0e02a-650f-44c5-972d-502b035cdf12\") " Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.460461 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a0e02a-650f-44c5-972d-502b035cdf12-logs\") pod \"63a0e02a-650f-44c5-972d-502b035cdf12\" (UID: \"63a0e02a-650f-44c5-972d-502b035cdf12\") " Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.460668 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4llj8\" (UniqueName: \"kubernetes.io/projected/63a0e02a-650f-44c5-972d-502b035cdf12-kube-api-access-4llj8\") pod \"63a0e02a-650f-44c5-972d-502b035cdf12\" (UID: \"63a0e02a-650f-44c5-972d-502b035cdf12\") " Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.461058 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63a0e02a-650f-44c5-972d-502b035cdf12-logs" (OuterVolumeSpecName: "logs") pod "63a0e02a-650f-44c5-972d-502b035cdf12" (UID: "63a0e02a-650f-44c5-972d-502b035cdf12"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.461634 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a0e02a-650f-44c5-972d-502b035cdf12-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.466583 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63a0e02a-650f-44c5-972d-502b035cdf12-kube-api-access-4llj8" (OuterVolumeSpecName: "kube-api-access-4llj8") pod "63a0e02a-650f-44c5-972d-502b035cdf12" (UID: "63a0e02a-650f-44c5-972d-502b035cdf12"). InnerVolumeSpecName "kube-api-access-4llj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.481419 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.499665 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a0e02a-650f-44c5-972d-502b035cdf12-config-data" (OuterVolumeSpecName: "config-data") pod "63a0e02a-650f-44c5-972d-502b035cdf12" (UID: "63a0e02a-650f-44c5-972d-502b035cdf12"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.563662 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjt5j\" (UniqueName: \"kubernetes.io/projected/c4bc89c8-e39a-49e0-a1d5-53f54472ad24-kube-api-access-rjt5j\") pod \"c4bc89c8-e39a-49e0-a1d5-53f54472ad24\" (UID: \"c4bc89c8-e39a-49e0-a1d5-53f54472ad24\") " Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.563758 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4bc89c8-e39a-49e0-a1d5-53f54472ad24-logs\") pod \"c4bc89c8-e39a-49e0-a1d5-53f54472ad24\" (UID: \"c4bc89c8-e39a-49e0-a1d5-53f54472ad24\") " Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.563829 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4bc89c8-e39a-49e0-a1d5-53f54472ad24-config-data\") pod \"c4bc89c8-e39a-49e0-a1d5-53f54472ad24\" (UID: \"c4bc89c8-e39a-49e0-a1d5-53f54472ad24\") " Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.564129 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4llj8\" (UniqueName: \"kubernetes.io/projected/63a0e02a-650f-44c5-972d-502b035cdf12-kube-api-access-4llj8\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.564149 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a0e02a-650f-44c5-972d-502b035cdf12-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.568192 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4bc89c8-e39a-49e0-a1d5-53f54472ad24-kube-api-access-rjt5j" (OuterVolumeSpecName: "kube-api-access-rjt5j") pod "c4bc89c8-e39a-49e0-a1d5-53f54472ad24" (UID: "c4bc89c8-e39a-49e0-a1d5-53f54472ad24"). InnerVolumeSpecName "kube-api-access-rjt5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.568610 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4bc89c8-e39a-49e0-a1d5-53f54472ad24-logs" (OuterVolumeSpecName: "logs") pod "c4bc89c8-e39a-49e0-a1d5-53f54472ad24" (UID: "c4bc89c8-e39a-49e0-a1d5-53f54472ad24"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.587496 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4bc89c8-e39a-49e0-a1d5-53f54472ad24-config-data" (OuterVolumeSpecName: "config-data") pod "c4bc89c8-e39a-49e0-a1d5-53f54472ad24" (UID: "c4bc89c8-e39a-49e0-a1d5-53f54472ad24"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.595095 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:47:39 crc kubenswrapper[5001]: E0128 17:47:39.595400 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.605393 5001 generic.go:334] "Generic (PLEG): container finished" podID="63a0e02a-650f-44c5-972d-502b035cdf12" containerID="761c1584fae705e7cf3c503cd3ea82993d1f6da2d27ab10937e09ff9af572818" exitCode=0 Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.605453 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"63a0e02a-650f-44c5-972d-502b035cdf12","Type":"ContainerDied","Data":"761c1584fae705e7cf3c503cd3ea82993d1f6da2d27ab10937e09ff9af572818"} Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.605479 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"63a0e02a-650f-44c5-972d-502b035cdf12","Type":"ContainerDied","Data":"6bf1b0341c5870b81d4b27fa8733d2e36772a0282910d0b082946f8955259abe"} Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.605497 5001 scope.go:117] "RemoveContainer" containerID="761c1584fae705e7cf3c503cd3ea82993d1f6da2d27ab10937e09ff9af572818" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.605609 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.611069 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.610967 5001 generic.go:334] "Generic (PLEG): container finished" podID="c4bc89c8-e39a-49e0-a1d5-53f54472ad24" containerID="6a74955ec06ebc91c7c4cacc437f97c90003217cfb401cd52831e007d666a192" exitCode=0 Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.611072 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"c4bc89c8-e39a-49e0-a1d5-53f54472ad24","Type":"ContainerDied","Data":"6a74955ec06ebc91c7c4cacc437f97c90003217cfb401cd52831e007d666a192"} Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.611216 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"c4bc89c8-e39a-49e0-a1d5-53f54472ad24","Type":"ContainerDied","Data":"e3d268250b8240613a53061b696bd6de3c6d34700c4fc49b3f3b79a4fe493d14"} Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.665741 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjt5j\" (UniqueName: \"kubernetes.io/projected/c4bc89c8-e39a-49e0-a1d5-53f54472ad24-kube-api-access-rjt5j\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.665785 5001 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4bc89c8-e39a-49e0-a1d5-53f54472ad24-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.665796 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4bc89c8-e39a-49e0-a1d5-53f54472ad24-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.709992 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.718265 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.718958 5001 scope.go:117] "RemoveContainer" containerID="4ca4e1adf3242d93c6909ec5bfd4aac60f63b0a8306101d1cc9f6e17bf8c1592" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.739141 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.770932 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.795938 5001 scope.go:117] "RemoveContainer" containerID="761c1584fae705e7cf3c503cd3ea82993d1f6da2d27ab10937e09ff9af572818" Jan 28 17:47:39 crc kubenswrapper[5001]: E0128 17:47:39.797522 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"761c1584fae705e7cf3c503cd3ea82993d1f6da2d27ab10937e09ff9af572818\": container with ID starting with 761c1584fae705e7cf3c503cd3ea82993d1f6da2d27ab10937e09ff9af572818 not found: ID does not exist" containerID="761c1584fae705e7cf3c503cd3ea82993d1f6da2d27ab10937e09ff9af572818" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.797656 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"761c1584fae705e7cf3c503cd3ea82993d1f6da2d27ab10937e09ff9af572818"} err="failed to get container status 
\"761c1584fae705e7cf3c503cd3ea82993d1f6da2d27ab10937e09ff9af572818\": rpc error: code = NotFound desc = could not find container \"761c1584fae705e7cf3c503cd3ea82993d1f6da2d27ab10937e09ff9af572818\": container with ID starting with 761c1584fae705e7cf3c503cd3ea82993d1f6da2d27ab10937e09ff9af572818 not found: ID does not exist" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.797776 5001 scope.go:117] "RemoveContainer" containerID="4ca4e1adf3242d93c6909ec5bfd4aac60f63b0a8306101d1cc9f6e17bf8c1592" Jan 28 17:47:39 crc kubenswrapper[5001]: E0128 17:47:39.798375 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ca4e1adf3242d93c6909ec5bfd4aac60f63b0a8306101d1cc9f6e17bf8c1592\": container with ID starting with 4ca4e1adf3242d93c6909ec5bfd4aac60f63b0a8306101d1cc9f6e17bf8c1592 not found: ID does not exist" containerID="4ca4e1adf3242d93c6909ec5bfd4aac60f63b0a8306101d1cc9f6e17bf8c1592" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.798489 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ca4e1adf3242d93c6909ec5bfd4aac60f63b0a8306101d1cc9f6e17bf8c1592"} err="failed to get container status \"4ca4e1adf3242d93c6909ec5bfd4aac60f63b0a8306101d1cc9f6e17bf8c1592\": rpc error: code = NotFound desc = could not find container \"4ca4e1adf3242d93c6909ec5bfd4aac60f63b0a8306101d1cc9f6e17bf8c1592\": container with ID starting with 4ca4e1adf3242d93c6909ec5bfd4aac60f63b0a8306101d1cc9f6e17bf8c1592 not found: ID does not exist" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.798572 5001 scope.go:117] "RemoveContainer" containerID="6a74955ec06ebc91c7c4cacc437f97c90003217cfb401cd52831e007d666a192" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.798825 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:47:39 crc kubenswrapper[5001]: E0128 17:47:39.799483 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19d576b9-5be5-4988-a627-4d6b96e55a64" containerName="nova-manage" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.799574 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="19d576b9-5be5-4988-a627-4d6b96e55a64" containerName="nova-manage" Jan 28 17:47:39 crc kubenswrapper[5001]: E0128 17:47:39.799734 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4bc89c8-e39a-49e0-a1d5-53f54472ad24" containerName="nova-kuttl-metadata-log" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.799819 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4bc89c8-e39a-49e0-a1d5-53f54472ad24" containerName="nova-kuttl-metadata-log" Jan 28 17:47:39 crc kubenswrapper[5001]: E0128 17:47:39.799899 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4bc89c8-e39a-49e0-a1d5-53f54472ad24" containerName="nova-kuttl-metadata-metadata" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.799962 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4bc89c8-e39a-49e0-a1d5-53f54472ad24" containerName="nova-kuttl-metadata-metadata" Jan 28 17:47:39 crc kubenswrapper[5001]: E0128 17:47:39.800063 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a0e02a-650f-44c5-972d-502b035cdf12" containerName="nova-kuttl-api-log" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.800136 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a0e02a-650f-44c5-972d-502b035cdf12" containerName="nova-kuttl-api-log" Jan 28 17:47:39 crc 
kubenswrapper[5001]: E0128 17:47:39.800240 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a0e02a-650f-44c5-972d-502b035cdf12" containerName="nova-kuttl-api-api" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.800316 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a0e02a-650f-44c5-972d-502b035cdf12" containerName="nova-kuttl-api-api" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.800625 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a0e02a-650f-44c5-972d-502b035cdf12" containerName="nova-kuttl-api-api" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.800717 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="19d576b9-5be5-4988-a627-4d6b96e55a64" containerName="nova-manage" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.800797 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a0e02a-650f-44c5-972d-502b035cdf12" containerName="nova-kuttl-api-log" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.800870 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4bc89c8-e39a-49e0-a1d5-53f54472ad24" containerName="nova-kuttl-metadata-metadata" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.801027 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4bc89c8-e39a-49e0-a1d5-53f54472ad24" containerName="nova-kuttl-metadata-log" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.803802 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.805587 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.807332 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.815179 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.822655 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.825385 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.825685 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.850528 5001 scope.go:117] "RemoveContainer" containerID="0efba886227010f3c28ba75eb2b0180ff32cd0fccbe043e13782f80bf2b47e5a" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.881107 5001 scope.go:117] "RemoveContainer" containerID="6a74955ec06ebc91c7c4cacc437f97c90003217cfb401cd52831e007d666a192" Jan 28 17:47:39 crc kubenswrapper[5001]: E0128 17:47:39.881507 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a74955ec06ebc91c7c4cacc437f97c90003217cfb401cd52831e007d666a192\": container with ID starting with 6a74955ec06ebc91c7c4cacc437f97c90003217cfb401cd52831e007d666a192 not found: ID does not exist" containerID="6a74955ec06ebc91c7c4cacc437f97c90003217cfb401cd52831e007d666a192" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.881590 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a74955ec06ebc91c7c4cacc437f97c90003217cfb401cd52831e007d666a192"} err="failed to get container status \"6a74955ec06ebc91c7c4cacc437f97c90003217cfb401cd52831e007d666a192\": rpc error: code = NotFound desc = could not find container \"6a74955ec06ebc91c7c4cacc437f97c90003217cfb401cd52831e007d666a192\": container with ID starting with 6a74955ec06ebc91c7c4cacc437f97c90003217cfb401cd52831e007d666a192 not found: ID does not exist" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.881623 5001 scope.go:117] "RemoveContainer" containerID="0efba886227010f3c28ba75eb2b0180ff32cd0fccbe043e13782f80bf2b47e5a" Jan 28 17:47:39 crc kubenswrapper[5001]: E0128 17:47:39.882046 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0efba886227010f3c28ba75eb2b0180ff32cd0fccbe043e13782f80bf2b47e5a\": container with ID starting with 0efba886227010f3c28ba75eb2b0180ff32cd0fccbe043e13782f80bf2b47e5a not found: ID does not exist" containerID="0efba886227010f3c28ba75eb2b0180ff32cd0fccbe043e13782f80bf2b47e5a" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.882091 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0efba886227010f3c28ba75eb2b0180ff32cd0fccbe043e13782f80bf2b47e5a"} err="failed to get container status \"0efba886227010f3c28ba75eb2b0180ff32cd0fccbe043e13782f80bf2b47e5a\": rpc error: code = NotFound desc = could not find container \"0efba886227010f3c28ba75eb2b0180ff32cd0fccbe043e13782f80bf2b47e5a\": container with ID starting with 0efba886227010f3c28ba75eb2b0180ff32cd0fccbe043e13782f80bf2b47e5a not found: ID does not exist" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.973828 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bb877d2-6693-4274-a705-6551fe435fb2-config-data\") pod \"nova-kuttl-api-0\" (UID: \"8bb877d2-6693-4274-a705-6551fe435fb2\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.973946 5001 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bb877d2-6693-4274-a705-6551fe435fb2-logs\") pod \"nova-kuttl-api-0\" (UID: \"8bb877d2-6693-4274-a705-6551fe435fb2\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.974068 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdc03496-6661-435e-ae7f-c20ba5e7b381-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"cdc03496-6661-435e-ae7f-c20ba5e7b381\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.974130 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8mp6\" (UniqueName: \"kubernetes.io/projected/cdc03496-6661-435e-ae7f-c20ba5e7b381-kube-api-access-f8mp6\") pod \"nova-kuttl-metadata-0\" (UID: \"cdc03496-6661-435e-ae7f-c20ba5e7b381\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.974184 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc03496-6661-435e-ae7f-c20ba5e7b381-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"cdc03496-6661-435e-ae7f-c20ba5e7b381\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:39 crc kubenswrapper[5001]: I0128 17:47:39.974259 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25zlq\" (UniqueName: \"kubernetes.io/projected/8bb877d2-6693-4274-a705-6551fe435fb2-kube-api-access-25zlq\") pod \"nova-kuttl-api-0\" (UID: \"8bb877d2-6693-4274-a705-6551fe435fb2\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.036217 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.075320 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25zlq\" (UniqueName: \"kubernetes.io/projected/8bb877d2-6693-4274-a705-6551fe435fb2-kube-api-access-25zlq\") pod \"nova-kuttl-api-0\" (UID: \"8bb877d2-6693-4274-a705-6551fe435fb2\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.075396 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bb877d2-6693-4274-a705-6551fe435fb2-config-data\") pod \"nova-kuttl-api-0\" (UID: \"8bb877d2-6693-4274-a705-6551fe435fb2\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.075469 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bb877d2-6693-4274-a705-6551fe435fb2-logs\") pod \"nova-kuttl-api-0\" (UID: \"8bb877d2-6693-4274-a705-6551fe435fb2\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.075516 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdc03496-6661-435e-ae7f-c20ba5e7b381-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"cdc03496-6661-435e-ae7f-c20ba5e7b381\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.075571 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8mp6\" (UniqueName: \"kubernetes.io/projected/cdc03496-6661-435e-ae7f-c20ba5e7b381-kube-api-access-f8mp6\") pod \"nova-kuttl-metadata-0\" (UID: \"cdc03496-6661-435e-ae7f-c20ba5e7b381\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.075596 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc03496-6661-435e-ae7f-c20ba5e7b381-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"cdc03496-6661-435e-ae7f-c20ba5e7b381\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.076279 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdc03496-6661-435e-ae7f-c20ba5e7b381-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"cdc03496-6661-435e-ae7f-c20ba5e7b381\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.076431 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bb877d2-6693-4274-a705-6551fe435fb2-logs\") pod \"nova-kuttl-api-0\" (UID: \"8bb877d2-6693-4274-a705-6551fe435fb2\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.080308 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc03496-6661-435e-ae7f-c20ba5e7b381-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"cdc03496-6661-435e-ae7f-c20ba5e7b381\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.081616 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8bb877d2-6693-4274-a705-6551fe435fb2-config-data\") pod \"nova-kuttl-api-0\" (UID: \"8bb877d2-6693-4274-a705-6551fe435fb2\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.100574 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25zlq\" (UniqueName: \"kubernetes.io/projected/8bb877d2-6693-4274-a705-6551fe435fb2-kube-api-access-25zlq\") pod \"nova-kuttl-api-0\" (UID: \"8bb877d2-6693-4274-a705-6551fe435fb2\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.100727 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8mp6\" (UniqueName: \"kubernetes.io/projected/cdc03496-6661-435e-ae7f-c20ba5e7b381-kube-api-access-f8mp6\") pod \"nova-kuttl-metadata-0\" (UID: \"cdc03496-6661-435e-ae7f-c20ba5e7b381\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.142530 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.143861 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.177025 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4tzb\" (UniqueName: \"kubernetes.io/projected/f2bb6527-e6fa-4eb7-96fe-53dfde2e7515-kube-api-access-r4tzb\") pod \"f2bb6527-e6fa-4eb7-96fe-53dfde2e7515\" (UID: \"f2bb6527-e6fa-4eb7-96fe-53dfde2e7515\") " Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.177091 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bb6527-e6fa-4eb7-96fe-53dfde2e7515-config-data\") pod \"f2bb6527-e6fa-4eb7-96fe-53dfde2e7515\" (UID: \"f2bb6527-e6fa-4eb7-96fe-53dfde2e7515\") " Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.182398 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2bb6527-e6fa-4eb7-96fe-53dfde2e7515-kube-api-access-r4tzb" (OuterVolumeSpecName: "kube-api-access-r4tzb") pod "f2bb6527-e6fa-4eb7-96fe-53dfde2e7515" (UID: "f2bb6527-e6fa-4eb7-96fe-53dfde2e7515"). InnerVolumeSpecName "kube-api-access-r4tzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.198608 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bb6527-e6fa-4eb7-96fe-53dfde2e7515-config-data" (OuterVolumeSpecName: "config-data") pod "f2bb6527-e6fa-4eb7-96fe-53dfde2e7515" (UID: "f2bb6527-e6fa-4eb7-96fe-53dfde2e7515"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.279113 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4tzb\" (UniqueName: \"kubernetes.io/projected/f2bb6527-e6fa-4eb7-96fe-53dfde2e7515-kube-api-access-r4tzb\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.279442 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bb6527-e6fa-4eb7-96fe-53dfde2e7515-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.576674 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.603804 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63a0e02a-650f-44c5-972d-502b035cdf12" path="/var/lib/kubelet/pods/63a0e02a-650f-44c5-972d-502b035cdf12/volumes" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.605125 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4bc89c8-e39a-49e0-a1d5-53f54472ad24" path="/var/lib/kubelet/pods/c4bc89c8-e39a-49e0-a1d5-53f54472ad24/volumes" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.620753 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"cdc03496-6661-435e-ae7f-c20ba5e7b381","Type":"ContainerStarted","Data":"4147cf7d760aa32328bb23326b5d64308e018c5126623cc422edd7c4f64cb63a"} Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.625798 5001 generic.go:334] "Generic (PLEG): container finished" podID="f2bb6527-e6fa-4eb7-96fe-53dfde2e7515" containerID="4c03119ad5038c6e18957537193643f6989a6cb9c25f8c8f74eb3df0ba55b143" exitCode=0 Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.625838 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"f2bb6527-e6fa-4eb7-96fe-53dfde2e7515","Type":"ContainerDied","Data":"4c03119ad5038c6e18957537193643f6989a6cb9c25f8c8f74eb3df0ba55b143"} Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.625862 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"f2bb6527-e6fa-4eb7-96fe-53dfde2e7515","Type":"ContainerDied","Data":"fad02df2aaa00233e0756a201f369267b6c20faef867d3feb060a3ce11ba2a48"} Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.625883 5001 scope.go:117] "RemoveContainer" containerID="4c03119ad5038c6e18957537193643f6989a6cb9c25f8c8f74eb3df0ba55b143" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.626028 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.638573 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 17:47:40 crc kubenswrapper[5001]: W0128 17:47:40.647152 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bb877d2_6693_4274_a705_6551fe435fb2.slice/crio-2e05982f8a2fd21a89dbce62e9d6fd077dc33db8b1be3cc5f34c2b259a574b70 WatchSource:0}: Error finding container 2e05982f8a2fd21a89dbce62e9d6fd077dc33db8b1be3cc5f34c2b259a574b70: Status 404 returned error can't find the container with id 2e05982f8a2fd21a89dbce62e9d6fd077dc33db8b1be3cc5f34c2b259a574b70 Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.670540 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.680482 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.683184 5001 scope.go:117] "RemoveContainer" containerID="4c03119ad5038c6e18957537193643f6989a6cb9c25f8c8f74eb3df0ba55b143" Jan 28 17:47:40 crc kubenswrapper[5001]: E0128 17:47:40.686331 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c03119ad5038c6e18957537193643f6989a6cb9c25f8c8f74eb3df0ba55b143\": container with ID starting with 4c03119ad5038c6e18957537193643f6989a6cb9c25f8c8f74eb3df0ba55b143 not found: ID does not exist" containerID="4c03119ad5038c6e18957537193643f6989a6cb9c25f8c8f74eb3df0ba55b143" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.687409 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c03119ad5038c6e18957537193643f6989a6cb9c25f8c8f74eb3df0ba55b143"} err="failed to get container status \"4c03119ad5038c6e18957537193643f6989a6cb9c25f8c8f74eb3df0ba55b143\": rpc error: code = NotFound desc = could not find container \"4c03119ad5038c6e18957537193643f6989a6cb9c25f8c8f74eb3df0ba55b143\": container with ID starting with 4c03119ad5038c6e18957537193643f6989a6cb9c25f8c8f74eb3df0ba55b143 not found: ID does not exist" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.701192 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:47:40 crc kubenswrapper[5001]: E0128 17:47:40.701772 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bb6527-e6fa-4eb7-96fe-53dfde2e7515" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.701872 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bb6527-e6fa-4eb7-96fe-53dfde2e7515" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.702174 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bb6527-e6fa-4eb7-96fe-53dfde2e7515" containerName="nova-kuttl-scheduler-scheduler" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.702853 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.702935 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.706397 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.889190 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e25214b-bf18-4e49-82d6-53519f9b2ccd-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"7e25214b-bf18-4e49-82d6-53519f9b2ccd\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.889586 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9rws\" (UniqueName: \"kubernetes.io/projected/7e25214b-bf18-4e49-82d6-53519f9b2ccd-kube-api-access-w9rws\") pod \"nova-kuttl-scheduler-0\" (UID: \"7e25214b-bf18-4e49-82d6-53519f9b2ccd\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.990757 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9rws\" (UniqueName: \"kubernetes.io/projected/7e25214b-bf18-4e49-82d6-53519f9b2ccd-kube-api-access-w9rws\") pod \"nova-kuttl-scheduler-0\" (UID: \"7e25214b-bf18-4e49-82d6-53519f9b2ccd\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.990864 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e25214b-bf18-4e49-82d6-53519f9b2ccd-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"7e25214b-bf18-4e49-82d6-53519f9b2ccd\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:40 crc kubenswrapper[5001]: I0128 17:47:40.996576 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e25214b-bf18-4e49-82d6-53519f9b2ccd-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"7e25214b-bf18-4e49-82d6-53519f9b2ccd\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:41 crc kubenswrapper[5001]: I0128 17:47:41.007632 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9rws\" (UniqueName: \"kubernetes.io/projected/7e25214b-bf18-4e49-82d6-53519f9b2ccd-kube-api-access-w9rws\") pod \"nova-kuttl-scheduler-0\" (UID: \"7e25214b-bf18-4e49-82d6-53519f9b2ccd\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:41 crc kubenswrapper[5001]: I0128 17:47:41.022327 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:41 crc kubenswrapper[5001]: I0128 17:47:41.479239 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 17:47:41 crc kubenswrapper[5001]: W0128 17:47:41.483894 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e25214b_bf18_4e49_82d6_53519f9b2ccd.slice/crio-d1c271648d4d534f9b3d47ac3acde9781c5db10113acad4e99f748238eeec6aa WatchSource:0}: Error finding container d1c271648d4d534f9b3d47ac3acde9781c5db10113acad4e99f748238eeec6aa: Status 404 returned error can't find the container with id d1c271648d4d534f9b3d47ac3acde9781c5db10113acad4e99f748238eeec6aa Jan 28 17:47:41 crc kubenswrapper[5001]: I0128 17:47:41.636883 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"7e25214b-bf18-4e49-82d6-53519f9b2ccd","Type":"ContainerStarted","Data":"d1c271648d4d534f9b3d47ac3acde9781c5db10113acad4e99f748238eeec6aa"} Jan 28 17:47:41 crc kubenswrapper[5001]: I0128 17:47:41.641923 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"cdc03496-6661-435e-ae7f-c20ba5e7b381","Type":"ContainerStarted","Data":"aabe2b37512f1822227224c71bf417bbb90d0ffed92e9a602bad8a9c423a504e"} Jan 28 17:47:41 crc kubenswrapper[5001]: I0128 17:47:41.642020 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"cdc03496-6661-435e-ae7f-c20ba5e7b381","Type":"ContainerStarted","Data":"4f3d68bea680bb10487aad1a9eccf3e83b122708b5168d0ecafd26dfd88d3503"} Jan 28 17:47:41 crc kubenswrapper[5001]: I0128 17:47:41.644530 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8bb877d2-6693-4274-a705-6551fe435fb2","Type":"ContainerStarted","Data":"cc3926340d63ccbaba49753dbc2833b3447b71b59d30209794607036ba1336e5"} Jan 28 17:47:41 crc kubenswrapper[5001]: I0128 17:47:41.644564 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8bb877d2-6693-4274-a705-6551fe435fb2","Type":"ContainerStarted","Data":"99f5419b2c1fc9f65b0c8ac92afef0c24d2a5f8c217bb555c9e128ef4fe55e71"} Jan 28 17:47:41 crc kubenswrapper[5001]: I0128 17:47:41.644576 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8bb877d2-6693-4274-a705-6551fe435fb2","Type":"ContainerStarted","Data":"2e05982f8a2fd21a89dbce62e9d6fd077dc33db8b1be3cc5f34c2b259a574b70"} Jan 28 17:47:41 crc kubenswrapper[5001]: I0128 17:47:41.665221 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.665205051 podStartE2EDuration="2.665205051s" podCreationTimestamp="2026-01-28 17:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:47:41.659278761 +0000 UTC m=+1907.827066991" watchObservedRunningTime="2026-01-28 17:47:41.665205051 +0000 UTC m=+1907.832993281" Jan 28 17:47:41 crc kubenswrapper[5001]: I0128 17:47:41.676552 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.676533068 podStartE2EDuration="2.676533068s" podCreationTimestamp="2026-01-28 17:47:39 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:47:41.675658673 +0000 UTC m=+1907.843446913" watchObservedRunningTime="2026-01-28 17:47:41.676533068 +0000 UTC m=+1907.844321298" Jan 28 17:47:42 crc kubenswrapper[5001]: I0128 17:47:42.621697 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2bb6527-e6fa-4eb7-96fe-53dfde2e7515" path="/var/lib/kubelet/pods/f2bb6527-e6fa-4eb7-96fe-53dfde2e7515/volumes" Jan 28 17:47:42 crc kubenswrapper[5001]: I0128 17:47:42.654924 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"7e25214b-bf18-4e49-82d6-53519f9b2ccd","Type":"ContainerStarted","Data":"ab5a2f83c2a740e251dccd410fe181957de1a5a6d1becfac9c8bababf0b97ba2"} Jan 28 17:47:42 crc kubenswrapper[5001]: I0128 17:47:42.673580 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.67353767 podStartE2EDuration="2.67353767s" podCreationTimestamp="2026-01-28 17:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:47:42.673025816 +0000 UTC m=+1908.840814046" watchObservedRunningTime="2026-01-28 17:47:42.67353767 +0000 UTC m=+1908.841325900" Jan 28 17:47:45 crc kubenswrapper[5001]: I0128 17:47:45.144077 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:45 crc kubenswrapper[5001]: I0128 17:47:45.145043 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:46 crc kubenswrapper[5001]: I0128 17:47:46.022913 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:50 crc kubenswrapper[5001]: I0128 17:47:50.143517 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:50 crc kubenswrapper[5001]: I0128 17:47:50.143858 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:47:50 crc kubenswrapper[5001]: I0128 17:47:50.145152 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:50 crc kubenswrapper[5001]: I0128 17:47:50.145175 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:47:51 crc kubenswrapper[5001]: I0128 17:47:51.023434 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:51 crc kubenswrapper[5001]: I0128 17:47:51.062395 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:51 crc kubenswrapper[5001]: I0128 17:47:51.307386 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="cdc03496-6661-435e-ae7f-c20ba5e7b381" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.232:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:47:51 crc kubenswrapper[5001]: I0128 17:47:51.307411 5001 prober.go:107] "Probe failed" 
probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="cdc03496-6661-435e-ae7f-c20ba5e7b381" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.232:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:47:51 crc kubenswrapper[5001]: I0128 17:47:51.307425 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="8bb877d2-6693-4274-a705-6551fe435fb2" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.231:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:47:51 crc kubenswrapper[5001]: I0128 17:47:51.308465 5001 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="8bb877d2-6693-4274-a705-6551fe435fb2" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.231:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:47:51 crc kubenswrapper[5001]: I0128 17:47:51.780875 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 17:47:53 crc kubenswrapper[5001]: I0128 17:47:53.594755 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:47:53 crc kubenswrapper[5001]: E0128 17:47:53.595772 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:48:00 crc kubenswrapper[5001]: I0128 17:48:00.153358 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:48:00 crc kubenswrapper[5001]: I0128 17:48:00.160914 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:48:00 crc kubenswrapper[5001]: I0128 17:48:00.161475 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:48:00 crc kubenswrapper[5001]: I0128 17:48:00.161904 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:48:00 crc kubenswrapper[5001]: I0128 17:48:00.165740 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:48:00 crc kubenswrapper[5001]: I0128 17:48:00.219644 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 17:48:00 crc kubenswrapper[5001]: I0128 17:48:00.220465 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:48:00 crc kubenswrapper[5001]: I0128 17:48:00.833763 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:48:00 crc kubenswrapper[5001]: I0128 17:48:00.836208 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 
17:48:00 crc kubenswrapper[5001]: I0128 17:48:00.837371 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 17:48:02 crc kubenswrapper[5001]: I0128 17:48:02.938552 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst"] Jan 28 17:48:02 crc kubenswrapper[5001]: I0128 17:48:02.939912 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" Jan 28 17:48:02 crc kubenswrapper[5001]: I0128 17:48:02.942737 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 28 17:48:02 crc kubenswrapper[5001]: I0128 17:48:02.942817 5001 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 28 17:48:02 crc kubenswrapper[5001]: I0128 17:48:02.950955 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst"] Jan 28 17:48:02 crc kubenswrapper[5001]: I0128 17:48:02.985958 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbqv2\" (UniqueName: \"kubernetes.io/projected/44595eef-a540-442d-8c7a-5f8bd2f2488c-kube-api-access-cbqv2\") pod \"nova-kuttl-cell1-cell-delete-tvxst\" (UID: \"44595eef-a540-442d-8c7a-5f8bd2f2488c\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" Jan 28 17:48:02 crc kubenswrapper[5001]: I0128 17:48:02.986034 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44595eef-a540-442d-8c7a-5f8bd2f2488c-scripts\") pod \"nova-kuttl-cell1-cell-delete-tvxst\" (UID: \"44595eef-a540-442d-8c7a-5f8bd2f2488c\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" Jan 28 17:48:02 crc kubenswrapper[5001]: I0128 17:48:02.986104 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44595eef-a540-442d-8c7a-5f8bd2f2488c-config-data\") pod \"nova-kuttl-cell1-cell-delete-tvxst\" (UID: \"44595eef-a540-442d-8c7a-5f8bd2f2488c\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" Jan 28 17:48:03 crc kubenswrapper[5001]: I0128 17:48:03.088057 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44595eef-a540-442d-8c7a-5f8bd2f2488c-scripts\") pod \"nova-kuttl-cell1-cell-delete-tvxst\" (UID: \"44595eef-a540-442d-8c7a-5f8bd2f2488c\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" Jan 28 17:48:03 crc kubenswrapper[5001]: I0128 17:48:03.088191 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44595eef-a540-442d-8c7a-5f8bd2f2488c-config-data\") pod \"nova-kuttl-cell1-cell-delete-tvxst\" (UID: \"44595eef-a540-442d-8c7a-5f8bd2f2488c\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" Jan 28 17:48:03 crc kubenswrapper[5001]: I0128 17:48:03.088305 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbqv2\" (UniqueName: \"kubernetes.io/projected/44595eef-a540-442d-8c7a-5f8bd2f2488c-kube-api-access-cbqv2\") pod \"nova-kuttl-cell1-cell-delete-tvxst\" (UID: \"44595eef-a540-442d-8c7a-5f8bd2f2488c\") " 
pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" Jan 28 17:48:03 crc kubenswrapper[5001]: I0128 17:48:03.095829 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44595eef-a540-442d-8c7a-5f8bd2f2488c-config-data\") pod \"nova-kuttl-cell1-cell-delete-tvxst\" (UID: \"44595eef-a540-442d-8c7a-5f8bd2f2488c\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" Jan 28 17:48:03 crc kubenswrapper[5001]: I0128 17:48:03.095850 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44595eef-a540-442d-8c7a-5f8bd2f2488c-scripts\") pod \"nova-kuttl-cell1-cell-delete-tvxst\" (UID: \"44595eef-a540-442d-8c7a-5f8bd2f2488c\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" Jan 28 17:48:03 crc kubenswrapper[5001]: I0128 17:48:03.111776 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbqv2\" (UniqueName: \"kubernetes.io/projected/44595eef-a540-442d-8c7a-5f8bd2f2488c-kube-api-access-cbqv2\") pod \"nova-kuttl-cell1-cell-delete-tvxst\" (UID: \"44595eef-a540-442d-8c7a-5f8bd2f2488c\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" Jan 28 17:48:03 crc kubenswrapper[5001]: I0128 17:48:03.277646 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" Jan 28 17:48:03 crc kubenswrapper[5001]: I0128 17:48:03.694381 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst"] Jan 28 17:48:03 crc kubenswrapper[5001]: I0128 17:48:03.857306 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" event={"ID":"44595eef-a540-442d-8c7a-5f8bd2f2488c","Type":"ContainerStarted","Data":"e57c136d37d63cc3d5f9a943c56e32ec88dc6139fc5ed86570df73f4b1294afc"} Jan 28 17:48:03 crc kubenswrapper[5001]: I0128 17:48:03.857662 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" event={"ID":"44595eef-a540-442d-8c7a-5f8bd2f2488c","Type":"ContainerStarted","Data":"be5b666b26347840f99268b8cb544f3ba0821401199e6689e359a938b6c273da"} Jan 28 17:48:03 crc kubenswrapper[5001]: I0128 17:48:03.869488 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podStartSLOduration=1.869469166 podStartE2EDuration="1.869469166s" podCreationTimestamp="2026-01-28 17:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:48:03.868637072 +0000 UTC m=+1930.036425312" watchObservedRunningTime="2026-01-28 17:48:03.869469166 +0000 UTC m=+1930.037257396" Jan 28 17:48:05 crc kubenswrapper[5001]: I0128 17:48:05.595236 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:48:05 crc kubenswrapper[5001]: E0128 17:48:05.595714 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 
28 17:48:08 crc kubenswrapper[5001]: I0128 17:48:08.904511 5001 generic.go:334] "Generic (PLEG): container finished" podID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerID="e57c136d37d63cc3d5f9a943c56e32ec88dc6139fc5ed86570df73f4b1294afc" exitCode=2 Jan 28 17:48:08 crc kubenswrapper[5001]: I0128 17:48:08.904588 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" event={"ID":"44595eef-a540-442d-8c7a-5f8bd2f2488c","Type":"ContainerDied","Data":"e57c136d37d63cc3d5f9a943c56e32ec88dc6139fc5ed86570df73f4b1294afc"} Jan 28 17:48:08 crc kubenswrapper[5001]: I0128 17:48:08.905315 5001 scope.go:117] "RemoveContainer" containerID="e57c136d37d63cc3d5f9a943c56e32ec88dc6139fc5ed86570df73f4b1294afc" Jan 28 17:48:09 crc kubenswrapper[5001]: I0128 17:48:09.915461 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" event={"ID":"44595eef-a540-442d-8c7a-5f8bd2f2488c","Type":"ContainerStarted","Data":"9b10456d1126441eb4ab694d63bad7f671973dc87c92804b9201eaa80d4843e2"} Jan 28 17:48:13 crc kubenswrapper[5001]: I0128 17:48:13.947607 5001 generic.go:334] "Generic (PLEG): container finished" podID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerID="9b10456d1126441eb4ab694d63bad7f671973dc87c92804b9201eaa80d4843e2" exitCode=2 Jan 28 17:48:13 crc kubenswrapper[5001]: I0128 17:48:13.947674 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" event={"ID":"44595eef-a540-442d-8c7a-5f8bd2f2488c","Type":"ContainerDied","Data":"9b10456d1126441eb4ab694d63bad7f671973dc87c92804b9201eaa80d4843e2"} Jan 28 17:48:13 crc kubenswrapper[5001]: I0128 17:48:13.948156 5001 scope.go:117] "RemoveContainer" containerID="e57c136d37d63cc3d5f9a943c56e32ec88dc6139fc5ed86570df73f4b1294afc" Jan 28 17:48:13 crc kubenswrapper[5001]: I0128 17:48:13.948639 5001 scope.go:117] "RemoveContainer" containerID="9b10456d1126441eb4ab694d63bad7f671973dc87c92804b9201eaa80d4843e2" Jan 28 17:48:13 crc kubenswrapper[5001]: E0128 17:48:13.948840 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:48:16 crc kubenswrapper[5001]: I0128 17:48:16.594546 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:48:16 crc kubenswrapper[5001]: E0128 17:48:16.595152 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:48:29 crc kubenswrapper[5001]: I0128 17:48:29.594657 5001 scope.go:117] "RemoveContainer" containerID="9b10456d1126441eb4ab694d63bad7f671973dc87c92804b9201eaa80d4843e2" Jan 28 17:48:30 crc kubenswrapper[5001]: I0128 17:48:30.108216 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" 
event={"ID":"44595eef-a540-442d-8c7a-5f8bd2f2488c","Type":"ContainerStarted","Data":"5157d237bfeac5eac7454d251ab1119f3fe153811f2fb78ebb19d886f894e69e"} Jan 28 17:48:30 crc kubenswrapper[5001]: I0128 17:48:30.594033 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:48:30 crc kubenswrapper[5001]: E0128 17:48:30.594350 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:48:33 crc kubenswrapper[5001]: I0128 17:48:33.399561 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5gvzs"] Jan 28 17:48:33 crc kubenswrapper[5001]: I0128 17:48:33.401707 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5gvzs" Jan 28 17:48:33 crc kubenswrapper[5001]: I0128 17:48:33.408443 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5gvzs"] Jan 28 17:48:33 crc kubenswrapper[5001]: I0128 17:48:33.582520 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/827618f5-fa57-4b0e-aa43-02a94998a709-catalog-content\") pod \"redhat-marketplace-5gvzs\" (UID: \"827618f5-fa57-4b0e-aa43-02a94998a709\") " pod="openshift-marketplace/redhat-marketplace-5gvzs" Jan 28 17:48:33 crc kubenswrapper[5001]: I0128 17:48:33.582587 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/827618f5-fa57-4b0e-aa43-02a94998a709-utilities\") pod \"redhat-marketplace-5gvzs\" (UID: \"827618f5-fa57-4b0e-aa43-02a94998a709\") " pod="openshift-marketplace/redhat-marketplace-5gvzs" Jan 28 17:48:33 crc kubenswrapper[5001]: I0128 17:48:33.582635 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94xvc\" (UniqueName: \"kubernetes.io/projected/827618f5-fa57-4b0e-aa43-02a94998a709-kube-api-access-94xvc\") pod \"redhat-marketplace-5gvzs\" (UID: \"827618f5-fa57-4b0e-aa43-02a94998a709\") " pod="openshift-marketplace/redhat-marketplace-5gvzs" Jan 28 17:48:33 crc kubenswrapper[5001]: I0128 17:48:33.684193 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/827618f5-fa57-4b0e-aa43-02a94998a709-catalog-content\") pod \"redhat-marketplace-5gvzs\" (UID: \"827618f5-fa57-4b0e-aa43-02a94998a709\") " pod="openshift-marketplace/redhat-marketplace-5gvzs" Jan 28 17:48:33 crc kubenswrapper[5001]: I0128 17:48:33.684291 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/827618f5-fa57-4b0e-aa43-02a94998a709-utilities\") pod \"redhat-marketplace-5gvzs\" (UID: \"827618f5-fa57-4b0e-aa43-02a94998a709\") " pod="openshift-marketplace/redhat-marketplace-5gvzs" Jan 28 17:48:33 crc kubenswrapper[5001]: I0128 17:48:33.684363 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94xvc\" 
(UniqueName: \"kubernetes.io/projected/827618f5-fa57-4b0e-aa43-02a94998a709-kube-api-access-94xvc\") pod \"redhat-marketplace-5gvzs\" (UID: \"827618f5-fa57-4b0e-aa43-02a94998a709\") " pod="openshift-marketplace/redhat-marketplace-5gvzs" Jan 28 17:48:33 crc kubenswrapper[5001]: I0128 17:48:33.684708 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/827618f5-fa57-4b0e-aa43-02a94998a709-catalog-content\") pod \"redhat-marketplace-5gvzs\" (UID: \"827618f5-fa57-4b0e-aa43-02a94998a709\") " pod="openshift-marketplace/redhat-marketplace-5gvzs" Jan 28 17:48:33 crc kubenswrapper[5001]: I0128 17:48:33.684769 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/827618f5-fa57-4b0e-aa43-02a94998a709-utilities\") pod \"redhat-marketplace-5gvzs\" (UID: \"827618f5-fa57-4b0e-aa43-02a94998a709\") " pod="openshift-marketplace/redhat-marketplace-5gvzs" Jan 28 17:48:33 crc kubenswrapper[5001]: I0128 17:48:33.710542 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94xvc\" (UniqueName: \"kubernetes.io/projected/827618f5-fa57-4b0e-aa43-02a94998a709-kube-api-access-94xvc\") pod \"redhat-marketplace-5gvzs\" (UID: \"827618f5-fa57-4b0e-aa43-02a94998a709\") " pod="openshift-marketplace/redhat-marketplace-5gvzs" Jan 28 17:48:33 crc kubenswrapper[5001]: I0128 17:48:33.733277 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5gvzs" Jan 28 17:48:34 crc kubenswrapper[5001]: I0128 17:48:34.215382 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5gvzs"] Jan 28 17:48:35 crc kubenswrapper[5001]: I0128 17:48:35.150818 5001 generic.go:334] "Generic (PLEG): container finished" podID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerID="5157d237bfeac5eac7454d251ab1119f3fe153811f2fb78ebb19d886f894e69e" exitCode=2 Jan 28 17:48:35 crc kubenswrapper[5001]: I0128 17:48:35.150915 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" event={"ID":"44595eef-a540-442d-8c7a-5f8bd2f2488c","Type":"ContainerDied","Data":"5157d237bfeac5eac7454d251ab1119f3fe153811f2fb78ebb19d886f894e69e"} Jan 28 17:48:35 crc kubenswrapper[5001]: I0128 17:48:35.151307 5001 scope.go:117] "RemoveContainer" containerID="9b10456d1126441eb4ab694d63bad7f671973dc87c92804b9201eaa80d4843e2" Jan 28 17:48:35 crc kubenswrapper[5001]: I0128 17:48:35.151886 5001 scope.go:117] "RemoveContainer" containerID="5157d237bfeac5eac7454d251ab1119f3fe153811f2fb78ebb19d886f894e69e" Jan 28 17:48:35 crc kubenswrapper[5001]: E0128 17:48:35.152124 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:48:35 crc kubenswrapper[5001]: I0128 17:48:35.155080 5001 generic.go:334] "Generic (PLEG): container finished" podID="827618f5-fa57-4b0e-aa43-02a94998a709" containerID="0b3cbec503c71772d25d39f3f6debacb70676f5ec70da249852968ecd2b945b8" exitCode=0 Jan 28 17:48:35 crc kubenswrapper[5001]: I0128 17:48:35.155118 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-5gvzs" event={"ID":"827618f5-fa57-4b0e-aa43-02a94998a709","Type":"ContainerDied","Data":"0b3cbec503c71772d25d39f3f6debacb70676f5ec70da249852968ecd2b945b8"} Jan 28 17:48:35 crc kubenswrapper[5001]: I0128 17:48:35.155141 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5gvzs" event={"ID":"827618f5-fa57-4b0e-aa43-02a94998a709","Type":"ContainerStarted","Data":"40d830c7b63cecc02754ea6fb152100e9b0ae846e3d519d1b80b427b0c46504d"} Jan 28 17:48:35 crc kubenswrapper[5001]: I0128 17:48:35.808881 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wpqp4"] Jan 28 17:48:35 crc kubenswrapper[5001]: I0128 17:48:35.811329 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wpqp4" Jan 28 17:48:35 crc kubenswrapper[5001]: I0128 17:48:35.821342 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wpqp4"] Jan 28 17:48:35 crc kubenswrapper[5001]: I0128 17:48:35.915374 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecceb50b-01e9-48bf-91c1-73a33d9869f7-catalog-content\") pod \"community-operators-wpqp4\" (UID: \"ecceb50b-01e9-48bf-91c1-73a33d9869f7\") " pod="openshift-marketplace/community-operators-wpqp4" Jan 28 17:48:35 crc kubenswrapper[5001]: I0128 17:48:35.915476 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecceb50b-01e9-48bf-91c1-73a33d9869f7-utilities\") pod \"community-operators-wpqp4\" (UID: \"ecceb50b-01e9-48bf-91c1-73a33d9869f7\") " pod="openshift-marketplace/community-operators-wpqp4" Jan 28 17:48:35 crc kubenswrapper[5001]: I0128 17:48:35.915515 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlq9b\" (UniqueName: \"kubernetes.io/projected/ecceb50b-01e9-48bf-91c1-73a33d9869f7-kube-api-access-dlq9b\") pod \"community-operators-wpqp4\" (UID: \"ecceb50b-01e9-48bf-91c1-73a33d9869f7\") " pod="openshift-marketplace/community-operators-wpqp4" Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.005100 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s4ggl"] Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.007219 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s4ggl" Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.015153 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s4ggl"] Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.016877 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecceb50b-01e9-48bf-91c1-73a33d9869f7-utilities\") pod \"community-operators-wpqp4\" (UID: \"ecceb50b-01e9-48bf-91c1-73a33d9869f7\") " pod="openshift-marketplace/community-operators-wpqp4" Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.016940 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlq9b\" (UniqueName: \"kubernetes.io/projected/ecceb50b-01e9-48bf-91c1-73a33d9869f7-kube-api-access-dlq9b\") pod \"community-operators-wpqp4\" (UID: \"ecceb50b-01e9-48bf-91c1-73a33d9869f7\") " pod="openshift-marketplace/community-operators-wpqp4" Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.017031 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecceb50b-01e9-48bf-91c1-73a33d9869f7-catalog-content\") pod \"community-operators-wpqp4\" (UID: \"ecceb50b-01e9-48bf-91c1-73a33d9869f7\") " pod="openshift-marketplace/community-operators-wpqp4" Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.017580 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecceb50b-01e9-48bf-91c1-73a33d9869f7-utilities\") pod \"community-operators-wpqp4\" (UID: \"ecceb50b-01e9-48bf-91c1-73a33d9869f7\") " pod="openshift-marketplace/community-operators-wpqp4" Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.017599 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecceb50b-01e9-48bf-91c1-73a33d9869f7-catalog-content\") pod \"community-operators-wpqp4\" (UID: \"ecceb50b-01e9-48bf-91c1-73a33d9869f7\") " pod="openshift-marketplace/community-operators-wpqp4" Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.050998 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlq9b\" (UniqueName: \"kubernetes.io/projected/ecceb50b-01e9-48bf-91c1-73a33d9869f7-kube-api-access-dlq9b\") pod \"community-operators-wpqp4\" (UID: \"ecceb50b-01e9-48bf-91c1-73a33d9869f7\") " pod="openshift-marketplace/community-operators-wpqp4" Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.118543 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d515959-8bb6-48df-a4b1-01160e237e25-utilities\") pod \"redhat-operators-s4ggl\" (UID: \"2d515959-8bb6-48df-a4b1-01160e237e25\") " pod="openshift-marketplace/redhat-operators-s4ggl" Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.118597 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwx5b\" (UniqueName: \"kubernetes.io/projected/2d515959-8bb6-48df-a4b1-01160e237e25-kube-api-access-bwx5b\") pod \"redhat-operators-s4ggl\" (UID: \"2d515959-8bb6-48df-a4b1-01160e237e25\") " pod="openshift-marketplace/redhat-operators-s4ggl" Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.118653 5001 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d515959-8bb6-48df-a4b1-01160e237e25-catalog-content\") pod \"redhat-operators-s4ggl\" (UID: \"2d515959-8bb6-48df-a4b1-01160e237e25\") " pod="openshift-marketplace/redhat-operators-s4ggl" Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.135939 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wpqp4" Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.220374 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d515959-8bb6-48df-a4b1-01160e237e25-catalog-content\") pod \"redhat-operators-s4ggl\" (UID: \"2d515959-8bb6-48df-a4b1-01160e237e25\") " pod="openshift-marketplace/redhat-operators-s4ggl" Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.220519 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d515959-8bb6-48df-a4b1-01160e237e25-utilities\") pod \"redhat-operators-s4ggl\" (UID: \"2d515959-8bb6-48df-a4b1-01160e237e25\") " pod="openshift-marketplace/redhat-operators-s4ggl" Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.220561 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwx5b\" (UniqueName: \"kubernetes.io/projected/2d515959-8bb6-48df-a4b1-01160e237e25-kube-api-access-bwx5b\") pod \"redhat-operators-s4ggl\" (UID: \"2d515959-8bb6-48df-a4b1-01160e237e25\") " pod="openshift-marketplace/redhat-operators-s4ggl" Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.221106 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d515959-8bb6-48df-a4b1-01160e237e25-catalog-content\") pod \"redhat-operators-s4ggl\" (UID: \"2d515959-8bb6-48df-a4b1-01160e237e25\") " pod="openshift-marketplace/redhat-operators-s4ggl" Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.221179 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d515959-8bb6-48df-a4b1-01160e237e25-utilities\") pod \"redhat-operators-s4ggl\" (UID: \"2d515959-8bb6-48df-a4b1-01160e237e25\") " pod="openshift-marketplace/redhat-operators-s4ggl" Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.245625 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwx5b\" (UniqueName: \"kubernetes.io/projected/2d515959-8bb6-48df-a4b1-01160e237e25-kube-api-access-bwx5b\") pod \"redhat-operators-s4ggl\" (UID: \"2d515959-8bb6-48df-a4b1-01160e237e25\") " pod="openshift-marketplace/redhat-operators-s4ggl" Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.352169 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s4ggl" Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.490766 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wpqp4"] Jan 28 17:48:36 crc kubenswrapper[5001]: I0128 17:48:36.711734 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s4ggl"] Jan 28 17:48:36 crc kubenswrapper[5001]: W0128 17:48:36.718541 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d515959_8bb6_48df_a4b1_01160e237e25.slice/crio-936175fcb27133dab4b55ba26b8eef9bc2185cb199a1852f1a94b543e0b8cfc0 WatchSource:0}: Error finding container 936175fcb27133dab4b55ba26b8eef9bc2185cb199a1852f1a94b543e0b8cfc0: Status 404 returned error can't find the container with id 936175fcb27133dab4b55ba26b8eef9bc2185cb199a1852f1a94b543e0b8cfc0 Jan 28 17:48:37 crc kubenswrapper[5001]: I0128 17:48:37.181473 5001 generic.go:334] "Generic (PLEG): container finished" podID="2d515959-8bb6-48df-a4b1-01160e237e25" containerID="e933094b90ad95f78f0e780427eb2d1d674fd425e456d0c30bcd72e644a5fba9" exitCode=0 Jan 28 17:48:37 crc kubenswrapper[5001]: I0128 17:48:37.181521 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4ggl" event={"ID":"2d515959-8bb6-48df-a4b1-01160e237e25","Type":"ContainerDied","Data":"e933094b90ad95f78f0e780427eb2d1d674fd425e456d0c30bcd72e644a5fba9"} Jan 28 17:48:37 crc kubenswrapper[5001]: I0128 17:48:37.181839 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4ggl" event={"ID":"2d515959-8bb6-48df-a4b1-01160e237e25","Type":"ContainerStarted","Data":"936175fcb27133dab4b55ba26b8eef9bc2185cb199a1852f1a94b543e0b8cfc0"} Jan 28 17:48:37 crc kubenswrapper[5001]: I0128 17:48:37.184173 5001 generic.go:334] "Generic (PLEG): container finished" podID="ecceb50b-01e9-48bf-91c1-73a33d9869f7" containerID="d11cd94b120a8a7311bd1c2201023765f7108aa17c2032b79f07f33209de5941" exitCode=0 Jan 28 17:48:37 crc kubenswrapper[5001]: I0128 17:48:37.184235 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpqp4" event={"ID":"ecceb50b-01e9-48bf-91c1-73a33d9869f7","Type":"ContainerDied","Data":"d11cd94b120a8a7311bd1c2201023765f7108aa17c2032b79f07f33209de5941"} Jan 28 17:48:37 crc kubenswrapper[5001]: I0128 17:48:37.184259 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpqp4" event={"ID":"ecceb50b-01e9-48bf-91c1-73a33d9869f7","Type":"ContainerStarted","Data":"316767f4fdebb8af1f8dfcd8b7e128465198a57f5fbf89d8fa2c0245ddc408fb"} Jan 28 17:48:37 crc kubenswrapper[5001]: I0128 17:48:37.187250 5001 generic.go:334] "Generic (PLEG): container finished" podID="827618f5-fa57-4b0e-aa43-02a94998a709" containerID="0f417e617b2b20d300d5a350798ade02ba6669ebd31a3c2b6a3b981cf6d2d47d" exitCode=0 Jan 28 17:48:37 crc kubenswrapper[5001]: I0128 17:48:37.187288 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5gvzs" event={"ID":"827618f5-fa57-4b0e-aa43-02a94998a709","Type":"ContainerDied","Data":"0f417e617b2b20d300d5a350798ade02ba6669ebd31a3c2b6a3b981cf6d2d47d"} Jan 28 17:48:38 crc kubenswrapper[5001]: I0128 17:48:38.199579 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5gvzs" 
event={"ID":"827618f5-fa57-4b0e-aa43-02a94998a709","Type":"ContainerStarted","Data":"3d757dfbba72d8a167a0d9490717e0eea190a1b1d6cfa097b03635541446e369"} Jan 28 17:48:38 crc kubenswrapper[5001]: I0128 17:48:38.203695 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4ggl" event={"ID":"2d515959-8bb6-48df-a4b1-01160e237e25","Type":"ContainerStarted","Data":"a29e607ba8d1ad2430a1c7412bb2805b3ce527b42b7dc2135d7c95cd3abdb35d"} Jan 28 17:48:38 crc kubenswrapper[5001]: I0128 17:48:38.208691 5001 generic.go:334] "Generic (PLEG): container finished" podID="ecceb50b-01e9-48bf-91c1-73a33d9869f7" containerID="a01d2a0f638909657e594cd892efe6e42b4d01f02230d02638e31172d6b78684" exitCode=0 Jan 28 17:48:38 crc kubenswrapper[5001]: I0128 17:48:38.208727 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpqp4" event={"ID":"ecceb50b-01e9-48bf-91c1-73a33d9869f7","Type":"ContainerDied","Data":"a01d2a0f638909657e594cd892efe6e42b4d01f02230d02638e31172d6b78684"} Jan 28 17:48:38 crc kubenswrapper[5001]: I0128 17:48:38.227092 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5gvzs" podStartSLOduration=2.806841856 podStartE2EDuration="5.227073467s" podCreationTimestamp="2026-01-28 17:48:33 +0000 UTC" firstStartedPulling="2026-01-28 17:48:35.173163634 +0000 UTC m=+1961.340951864" lastFinishedPulling="2026-01-28 17:48:37.593395245 +0000 UTC m=+1963.761183475" observedRunningTime="2026-01-28 17:48:38.220304611 +0000 UTC m=+1964.388092851" watchObservedRunningTime="2026-01-28 17:48:38.227073467 +0000 UTC m=+1964.394861717" Jan 28 17:48:39 crc kubenswrapper[5001]: I0128 17:48:39.220428 5001 generic.go:334] "Generic (PLEG): container finished" podID="2d515959-8bb6-48df-a4b1-01160e237e25" containerID="a29e607ba8d1ad2430a1c7412bb2805b3ce527b42b7dc2135d7c95cd3abdb35d" exitCode=0 Jan 28 17:48:39 crc kubenswrapper[5001]: I0128 17:48:39.220527 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4ggl" event={"ID":"2d515959-8bb6-48df-a4b1-01160e237e25","Type":"ContainerDied","Data":"a29e607ba8d1ad2430a1c7412bb2805b3ce527b42b7dc2135d7c95cd3abdb35d"} Jan 28 17:48:40 crc kubenswrapper[5001]: I0128 17:48:40.232150 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4ggl" event={"ID":"2d515959-8bb6-48df-a4b1-01160e237e25","Type":"ContainerStarted","Data":"a72b24ca90976f23a5a05207454869c06c747761da635bb8064ea652fd36b230"} Jan 28 17:48:40 crc kubenswrapper[5001]: I0128 17:48:40.234384 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpqp4" event={"ID":"ecceb50b-01e9-48bf-91c1-73a33d9869f7","Type":"ContainerStarted","Data":"62e77a6a90114306350af7ae0e1cdfa42eb8f3e5ec5927fdebd9bbf1b2b26663"} Jan 28 17:48:40 crc kubenswrapper[5001]: I0128 17:48:40.283443 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s4ggl" podStartSLOduration=2.519604476 podStartE2EDuration="5.28342348s" podCreationTimestamp="2026-01-28 17:48:35 +0000 UTC" firstStartedPulling="2026-01-28 17:48:37.183750667 +0000 UTC m=+1963.351538897" lastFinishedPulling="2026-01-28 17:48:39.947569671 +0000 UTC m=+1966.115357901" observedRunningTime="2026-01-28 17:48:40.259293744 +0000 UTC m=+1966.427081974" watchObservedRunningTime="2026-01-28 17:48:40.28342348 +0000 UTC m=+1966.451211710" Jan 28 17:48:40 crc 
kubenswrapper[5001]: I0128 17:48:40.288181 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wpqp4" podStartSLOduration=3.279075966 podStartE2EDuration="5.288163277s" podCreationTimestamp="2026-01-28 17:48:35 +0000 UTC" firstStartedPulling="2026-01-28 17:48:37.185614021 +0000 UTC m=+1963.353402251" lastFinishedPulling="2026-01-28 17:48:39.194701322 +0000 UTC m=+1965.362489562" observedRunningTime="2026-01-28 17:48:40.279914679 +0000 UTC m=+1966.447702909" watchObservedRunningTime="2026-01-28 17:48:40.288163277 +0000 UTC m=+1966.455951507" Jan 28 17:48:43 crc kubenswrapper[5001]: I0128 17:48:43.733799 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5gvzs" Jan 28 17:48:43 crc kubenswrapper[5001]: I0128 17:48:43.733871 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5gvzs" Jan 28 17:48:43 crc kubenswrapper[5001]: I0128 17:48:43.786497 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5gvzs" Jan 28 17:48:44 crc kubenswrapper[5001]: I0128 17:48:44.337649 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5gvzs" Jan 28 17:48:44 crc kubenswrapper[5001]: I0128 17:48:44.598964 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:48:44 crc kubenswrapper[5001]: E0128 17:48:44.599404 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:48:46 crc kubenswrapper[5001]: I0128 17:48:46.136306 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wpqp4" Jan 28 17:48:46 crc kubenswrapper[5001]: I0128 17:48:46.136714 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wpqp4" Jan 28 17:48:46 crc kubenswrapper[5001]: I0128 17:48:46.190228 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5gvzs"] Jan 28 17:48:46 crc kubenswrapper[5001]: I0128 17:48:46.191167 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wpqp4" Jan 28 17:48:46 crc kubenswrapper[5001]: I0128 17:48:46.291893 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5gvzs" podUID="827618f5-fa57-4b0e-aa43-02a94998a709" containerName="registry-server" containerID="cri-o://3d757dfbba72d8a167a0d9490717e0eea190a1b1d6cfa097b03635541446e369" gracePeriod=2 Jan 28 17:48:46 crc kubenswrapper[5001]: I0128 17:48:46.343878 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wpqp4" Jan 28 17:48:46 crc kubenswrapper[5001]: I0128 17:48:46.353082 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s4ggl" Jan 28 17:48:46 crc 
kubenswrapper[5001]: I0128 17:48:46.353126 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s4ggl" Jan 28 17:48:46 crc kubenswrapper[5001]: I0128 17:48:46.400314 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s4ggl" Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.273102 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5gvzs" Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.308194 5001 generic.go:334] "Generic (PLEG): container finished" podID="827618f5-fa57-4b0e-aa43-02a94998a709" containerID="3d757dfbba72d8a167a0d9490717e0eea190a1b1d6cfa097b03635541446e369" exitCode=0 Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.309131 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5gvzs" event={"ID":"827618f5-fa57-4b0e-aa43-02a94998a709","Type":"ContainerDied","Data":"3d757dfbba72d8a167a0d9490717e0eea190a1b1d6cfa097b03635541446e369"} Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.309171 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5gvzs" Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.309199 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5gvzs" event={"ID":"827618f5-fa57-4b0e-aa43-02a94998a709","Type":"ContainerDied","Data":"40d830c7b63cecc02754ea6fb152100e9b0ae846e3d519d1b80b427b0c46504d"} Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.309230 5001 scope.go:117] "RemoveContainer" containerID="3d757dfbba72d8a167a0d9490717e0eea190a1b1d6cfa097b03635541446e369" Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.333435 5001 scope.go:117] "RemoveContainer" containerID="0f417e617b2b20d300d5a350798ade02ba6669ebd31a3c2b6a3b981cf6d2d47d" Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.354573 5001 scope.go:117] "RemoveContainer" containerID="0b3cbec503c71772d25d39f3f6debacb70676f5ec70da249852968ecd2b945b8" Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.386646 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s4ggl" Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.387755 5001 scope.go:117] "RemoveContainer" containerID="3d757dfbba72d8a167a0d9490717e0eea190a1b1d6cfa097b03635541446e369" Jan 28 17:48:47 crc kubenswrapper[5001]: E0128 17:48:47.391592 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d757dfbba72d8a167a0d9490717e0eea190a1b1d6cfa097b03635541446e369\": container with ID starting with 3d757dfbba72d8a167a0d9490717e0eea190a1b1d6cfa097b03635541446e369 not found: ID does not exist" containerID="3d757dfbba72d8a167a0d9490717e0eea190a1b1d6cfa097b03635541446e369" Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.391632 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d757dfbba72d8a167a0d9490717e0eea190a1b1d6cfa097b03635541446e369"} err="failed to get container status \"3d757dfbba72d8a167a0d9490717e0eea190a1b1d6cfa097b03635541446e369\": rpc error: code = NotFound desc = could not find container \"3d757dfbba72d8a167a0d9490717e0eea190a1b1d6cfa097b03635541446e369\": container with ID starting with 
3d757dfbba72d8a167a0d9490717e0eea190a1b1d6cfa097b03635541446e369 not found: ID does not exist" Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.391664 5001 scope.go:117] "RemoveContainer" containerID="0f417e617b2b20d300d5a350798ade02ba6669ebd31a3c2b6a3b981cf6d2d47d" Jan 28 17:48:47 crc kubenswrapper[5001]: E0128 17:48:47.392039 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f417e617b2b20d300d5a350798ade02ba6669ebd31a3c2b6a3b981cf6d2d47d\": container with ID starting with 0f417e617b2b20d300d5a350798ade02ba6669ebd31a3c2b6a3b981cf6d2d47d not found: ID does not exist" containerID="0f417e617b2b20d300d5a350798ade02ba6669ebd31a3c2b6a3b981cf6d2d47d" Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.392073 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f417e617b2b20d300d5a350798ade02ba6669ebd31a3c2b6a3b981cf6d2d47d"} err="failed to get container status \"0f417e617b2b20d300d5a350798ade02ba6669ebd31a3c2b6a3b981cf6d2d47d\": rpc error: code = NotFound desc = could not find container \"0f417e617b2b20d300d5a350798ade02ba6669ebd31a3c2b6a3b981cf6d2d47d\": container with ID starting with 0f417e617b2b20d300d5a350798ade02ba6669ebd31a3c2b6a3b981cf6d2d47d not found: ID does not exist" Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.392089 5001 scope.go:117] "RemoveContainer" containerID="0b3cbec503c71772d25d39f3f6debacb70676f5ec70da249852968ecd2b945b8" Jan 28 17:48:47 crc kubenswrapper[5001]: E0128 17:48:47.392490 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b3cbec503c71772d25d39f3f6debacb70676f5ec70da249852968ecd2b945b8\": container with ID starting with 0b3cbec503c71772d25d39f3f6debacb70676f5ec70da249852968ecd2b945b8 not found: ID does not exist" containerID="0b3cbec503c71772d25d39f3f6debacb70676f5ec70da249852968ecd2b945b8" Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.392600 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b3cbec503c71772d25d39f3f6debacb70676f5ec70da249852968ecd2b945b8"} err="failed to get container status \"0b3cbec503c71772d25d39f3f6debacb70676f5ec70da249852968ecd2b945b8\": rpc error: code = NotFound desc = could not find container \"0b3cbec503c71772d25d39f3f6debacb70676f5ec70da249852968ecd2b945b8\": container with ID starting with 0b3cbec503c71772d25d39f3f6debacb70676f5ec70da249852968ecd2b945b8 not found: ID does not exist" Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.419262 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/827618f5-fa57-4b0e-aa43-02a94998a709-catalog-content\") pod \"827618f5-fa57-4b0e-aa43-02a94998a709\" (UID: \"827618f5-fa57-4b0e-aa43-02a94998a709\") " Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.419321 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/827618f5-fa57-4b0e-aa43-02a94998a709-utilities\") pod \"827618f5-fa57-4b0e-aa43-02a94998a709\" (UID: \"827618f5-fa57-4b0e-aa43-02a94998a709\") " Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.419391 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94xvc\" (UniqueName: \"kubernetes.io/projected/827618f5-fa57-4b0e-aa43-02a94998a709-kube-api-access-94xvc\") pod 
\"827618f5-fa57-4b0e-aa43-02a94998a709\" (UID: \"827618f5-fa57-4b0e-aa43-02a94998a709\") " Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.420541 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/827618f5-fa57-4b0e-aa43-02a94998a709-utilities" (OuterVolumeSpecName: "utilities") pod "827618f5-fa57-4b0e-aa43-02a94998a709" (UID: "827618f5-fa57-4b0e-aa43-02a94998a709"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.426118 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/827618f5-fa57-4b0e-aa43-02a94998a709-kube-api-access-94xvc" (OuterVolumeSpecName: "kube-api-access-94xvc") pod "827618f5-fa57-4b0e-aa43-02a94998a709" (UID: "827618f5-fa57-4b0e-aa43-02a94998a709"). InnerVolumeSpecName "kube-api-access-94xvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.442694 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/827618f5-fa57-4b0e-aa43-02a94998a709-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "827618f5-fa57-4b0e-aa43-02a94998a709" (UID: "827618f5-fa57-4b0e-aa43-02a94998a709"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.521303 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/827618f5-fa57-4b0e-aa43-02a94998a709-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.521332 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/827618f5-fa57-4b0e-aa43-02a94998a709-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.521345 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94xvc\" (UniqueName: \"kubernetes.io/projected/827618f5-fa57-4b0e-aa43-02a94998a709-kube-api-access-94xvc\") on node \"crc\" DevicePath \"\"" Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.663594 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5gvzs"] Jan 28 17:48:47 crc kubenswrapper[5001]: I0128 17:48:47.675022 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5gvzs"] Jan 28 17:48:48 crc kubenswrapper[5001]: I0128 17:48:48.591601 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wpqp4"] Jan 28 17:48:48 crc kubenswrapper[5001]: I0128 17:48:48.591806 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wpqp4" podUID="ecceb50b-01e9-48bf-91c1-73a33d9869f7" containerName="registry-server" containerID="cri-o://62e77a6a90114306350af7ae0e1cdfa42eb8f3e5ec5927fdebd9bbf1b2b26663" gracePeriod=2 Jan 28 17:48:48 crc kubenswrapper[5001]: I0128 17:48:48.604515 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="827618f5-fa57-4b0e-aa43-02a94998a709" path="/var/lib/kubelet/pods/827618f5-fa57-4b0e-aa43-02a94998a709/volumes" Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.120719 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wpqp4" Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.251227 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecceb50b-01e9-48bf-91c1-73a33d9869f7-utilities\") pod \"ecceb50b-01e9-48bf-91c1-73a33d9869f7\" (UID: \"ecceb50b-01e9-48bf-91c1-73a33d9869f7\") " Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.251317 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecceb50b-01e9-48bf-91c1-73a33d9869f7-catalog-content\") pod \"ecceb50b-01e9-48bf-91c1-73a33d9869f7\" (UID: \"ecceb50b-01e9-48bf-91c1-73a33d9869f7\") " Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.251406 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlq9b\" (UniqueName: \"kubernetes.io/projected/ecceb50b-01e9-48bf-91c1-73a33d9869f7-kube-api-access-dlq9b\") pod \"ecceb50b-01e9-48bf-91c1-73a33d9869f7\" (UID: \"ecceb50b-01e9-48bf-91c1-73a33d9869f7\") " Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.252336 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecceb50b-01e9-48bf-91c1-73a33d9869f7-utilities" (OuterVolumeSpecName: "utilities") pod "ecceb50b-01e9-48bf-91c1-73a33d9869f7" (UID: "ecceb50b-01e9-48bf-91c1-73a33d9869f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.255626 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecceb50b-01e9-48bf-91c1-73a33d9869f7-kube-api-access-dlq9b" (OuterVolumeSpecName: "kube-api-access-dlq9b") pod "ecceb50b-01e9-48bf-91c1-73a33d9869f7" (UID: "ecceb50b-01e9-48bf-91c1-73a33d9869f7"). InnerVolumeSpecName "kube-api-access-dlq9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.308217 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecceb50b-01e9-48bf-91c1-73a33d9869f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecceb50b-01e9-48bf-91c1-73a33d9869f7" (UID: "ecceb50b-01e9-48bf-91c1-73a33d9869f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.334345 5001 generic.go:334] "Generic (PLEG): container finished" podID="ecceb50b-01e9-48bf-91c1-73a33d9869f7" containerID="62e77a6a90114306350af7ae0e1cdfa42eb8f3e5ec5927fdebd9bbf1b2b26663" exitCode=0 Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.334427 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpqp4" event={"ID":"ecceb50b-01e9-48bf-91c1-73a33d9869f7","Type":"ContainerDied","Data":"62e77a6a90114306350af7ae0e1cdfa42eb8f3e5ec5927fdebd9bbf1b2b26663"} Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.334458 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpqp4" event={"ID":"ecceb50b-01e9-48bf-91c1-73a33d9869f7","Type":"ContainerDied","Data":"316767f4fdebb8af1f8dfcd8b7e128465198a57f5fbf89d8fa2c0245ddc408fb"} Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.334481 5001 scope.go:117] "RemoveContainer" containerID="62e77a6a90114306350af7ae0e1cdfa42eb8f3e5ec5927fdebd9bbf1b2b26663" Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.334690 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wpqp4" Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.354815 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecceb50b-01e9-48bf-91c1-73a33d9869f7-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.354846 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecceb50b-01e9-48bf-91c1-73a33d9869f7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.354856 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlq9b\" (UniqueName: \"kubernetes.io/projected/ecceb50b-01e9-48bf-91c1-73a33d9869f7-kube-api-access-dlq9b\") on node \"crc\" DevicePath \"\"" Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.358126 5001 scope.go:117] "RemoveContainer" containerID="a01d2a0f638909657e594cd892efe6e42b4d01f02230d02638e31172d6b78684" Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.370586 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wpqp4"] Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.378267 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wpqp4"] Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.394909 5001 scope.go:117] "RemoveContainer" containerID="d11cd94b120a8a7311bd1c2201023765f7108aa17c2032b79f07f33209de5941" Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.420145 5001 scope.go:117] "RemoveContainer" containerID="62e77a6a90114306350af7ae0e1cdfa42eb8f3e5ec5927fdebd9bbf1b2b26663" Jan 28 17:48:49 crc kubenswrapper[5001]: E0128 17:48:49.420566 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62e77a6a90114306350af7ae0e1cdfa42eb8f3e5ec5927fdebd9bbf1b2b26663\": container with ID starting with 62e77a6a90114306350af7ae0e1cdfa42eb8f3e5ec5927fdebd9bbf1b2b26663 not found: ID does not exist" containerID="62e77a6a90114306350af7ae0e1cdfa42eb8f3e5ec5927fdebd9bbf1b2b26663" Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.420599 
5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62e77a6a90114306350af7ae0e1cdfa42eb8f3e5ec5927fdebd9bbf1b2b26663"} err="failed to get container status \"62e77a6a90114306350af7ae0e1cdfa42eb8f3e5ec5927fdebd9bbf1b2b26663\": rpc error: code = NotFound desc = could not find container \"62e77a6a90114306350af7ae0e1cdfa42eb8f3e5ec5927fdebd9bbf1b2b26663\": container with ID starting with 62e77a6a90114306350af7ae0e1cdfa42eb8f3e5ec5927fdebd9bbf1b2b26663 not found: ID does not exist" Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.420622 5001 scope.go:117] "RemoveContainer" containerID="a01d2a0f638909657e594cd892efe6e42b4d01f02230d02638e31172d6b78684" Jan 28 17:48:49 crc kubenswrapper[5001]: E0128 17:48:49.421022 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a01d2a0f638909657e594cd892efe6e42b4d01f02230d02638e31172d6b78684\": container with ID starting with a01d2a0f638909657e594cd892efe6e42b4d01f02230d02638e31172d6b78684 not found: ID does not exist" containerID="a01d2a0f638909657e594cd892efe6e42b4d01f02230d02638e31172d6b78684" Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.421077 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a01d2a0f638909657e594cd892efe6e42b4d01f02230d02638e31172d6b78684"} err="failed to get container status \"a01d2a0f638909657e594cd892efe6e42b4d01f02230d02638e31172d6b78684\": rpc error: code = NotFound desc = could not find container \"a01d2a0f638909657e594cd892efe6e42b4d01f02230d02638e31172d6b78684\": container with ID starting with a01d2a0f638909657e594cd892efe6e42b4d01f02230d02638e31172d6b78684 not found: ID does not exist" Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.421103 5001 scope.go:117] "RemoveContainer" containerID="d11cd94b120a8a7311bd1c2201023765f7108aa17c2032b79f07f33209de5941" Jan 28 17:48:49 crc kubenswrapper[5001]: E0128 17:48:49.421476 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d11cd94b120a8a7311bd1c2201023765f7108aa17c2032b79f07f33209de5941\": container with ID starting with d11cd94b120a8a7311bd1c2201023765f7108aa17c2032b79f07f33209de5941 not found: ID does not exist" containerID="d11cd94b120a8a7311bd1c2201023765f7108aa17c2032b79f07f33209de5941" Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.421602 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d11cd94b120a8a7311bd1c2201023765f7108aa17c2032b79f07f33209de5941"} err="failed to get container status \"d11cd94b120a8a7311bd1c2201023765f7108aa17c2032b79f07f33209de5941\": rpc error: code = NotFound desc = could not find container \"d11cd94b120a8a7311bd1c2201023765f7108aa17c2032b79f07f33209de5941\": container with ID starting with d11cd94b120a8a7311bd1c2201023765f7108aa17c2032b79f07f33209de5941 not found: ID does not exist" Jan 28 17:48:49 crc kubenswrapper[5001]: I0128 17:48:49.594059 5001 scope.go:117] "RemoveContainer" containerID="5157d237bfeac5eac7454d251ab1119f3fe153811f2fb78ebb19d886f894e69e" Jan 28 17:48:49 crc kubenswrapper[5001]: E0128 17:48:49.594671 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" 
pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:48:50 crc kubenswrapper[5001]: I0128 17:48:50.390112 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s4ggl"] Jan 28 17:48:50 crc kubenswrapper[5001]: I0128 17:48:50.390598 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s4ggl" podUID="2d515959-8bb6-48df-a4b1-01160e237e25" containerName="registry-server" containerID="cri-o://a72b24ca90976f23a5a05207454869c06c747761da635bb8064ea652fd36b230" gracePeriod=2 Jan 28 17:48:50 crc kubenswrapper[5001]: I0128 17:48:50.605239 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecceb50b-01e9-48bf-91c1-73a33d9869f7" path="/var/lib/kubelet/pods/ecceb50b-01e9-48bf-91c1-73a33d9869f7/volumes" Jan 28 17:48:50 crc kubenswrapper[5001]: I0128 17:48:50.777736 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s4ggl" Jan 28 17:48:50 crc kubenswrapper[5001]: I0128 17:48:50.877555 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d515959-8bb6-48df-a4b1-01160e237e25-utilities\") pod \"2d515959-8bb6-48df-a4b1-01160e237e25\" (UID: \"2d515959-8bb6-48df-a4b1-01160e237e25\") " Jan 28 17:48:50 crc kubenswrapper[5001]: I0128 17:48:50.877619 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwx5b\" (UniqueName: \"kubernetes.io/projected/2d515959-8bb6-48df-a4b1-01160e237e25-kube-api-access-bwx5b\") pod \"2d515959-8bb6-48df-a4b1-01160e237e25\" (UID: \"2d515959-8bb6-48df-a4b1-01160e237e25\") " Jan 28 17:48:50 crc kubenswrapper[5001]: I0128 17:48:50.877640 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d515959-8bb6-48df-a4b1-01160e237e25-catalog-content\") pod \"2d515959-8bb6-48df-a4b1-01160e237e25\" (UID: \"2d515959-8bb6-48df-a4b1-01160e237e25\") " Jan 28 17:48:50 crc kubenswrapper[5001]: I0128 17:48:50.878914 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d515959-8bb6-48df-a4b1-01160e237e25-utilities" (OuterVolumeSpecName: "utilities") pod "2d515959-8bb6-48df-a4b1-01160e237e25" (UID: "2d515959-8bb6-48df-a4b1-01160e237e25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:48:50 crc kubenswrapper[5001]: I0128 17:48:50.883193 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d515959-8bb6-48df-a4b1-01160e237e25-kube-api-access-bwx5b" (OuterVolumeSpecName: "kube-api-access-bwx5b") pod "2d515959-8bb6-48df-a4b1-01160e237e25" (UID: "2d515959-8bb6-48df-a4b1-01160e237e25"). InnerVolumeSpecName "kube-api-access-bwx5b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:48:50 crc kubenswrapper[5001]: I0128 17:48:50.979090 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d515959-8bb6-48df-a4b1-01160e237e25-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:48:50 crc kubenswrapper[5001]: I0128 17:48:50.979123 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwx5b\" (UniqueName: \"kubernetes.io/projected/2d515959-8bb6-48df-a4b1-01160e237e25-kube-api-access-bwx5b\") on node \"crc\" DevicePath \"\"" Jan 28 17:48:51 crc kubenswrapper[5001]: I0128 17:48:51.001019 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d515959-8bb6-48df-a4b1-01160e237e25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d515959-8bb6-48df-a4b1-01160e237e25" (UID: "2d515959-8bb6-48df-a4b1-01160e237e25"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:48:51 crc kubenswrapper[5001]: I0128 17:48:51.080952 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d515959-8bb6-48df-a4b1-01160e237e25-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:48:51 crc kubenswrapper[5001]: I0128 17:48:51.365310 5001 generic.go:334] "Generic (PLEG): container finished" podID="2d515959-8bb6-48df-a4b1-01160e237e25" containerID="a72b24ca90976f23a5a05207454869c06c747761da635bb8064ea652fd36b230" exitCode=0 Jan 28 17:48:51 crc kubenswrapper[5001]: I0128 17:48:51.365387 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4ggl" event={"ID":"2d515959-8bb6-48df-a4b1-01160e237e25","Type":"ContainerDied","Data":"a72b24ca90976f23a5a05207454869c06c747761da635bb8064ea652fd36b230"} Jan 28 17:48:51 crc kubenswrapper[5001]: I0128 17:48:51.365691 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4ggl" event={"ID":"2d515959-8bb6-48df-a4b1-01160e237e25","Type":"ContainerDied","Data":"936175fcb27133dab4b55ba26b8eef9bc2185cb199a1852f1a94b543e0b8cfc0"} Jan 28 17:48:51 crc kubenswrapper[5001]: I0128 17:48:51.365447 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s4ggl" Jan 28 17:48:51 crc kubenswrapper[5001]: I0128 17:48:51.365772 5001 scope.go:117] "RemoveContainer" containerID="a72b24ca90976f23a5a05207454869c06c747761da635bb8064ea652fd36b230" Jan 28 17:48:51 crc kubenswrapper[5001]: I0128 17:48:51.393446 5001 scope.go:117] "RemoveContainer" containerID="a29e607ba8d1ad2430a1c7412bb2805b3ce527b42b7dc2135d7c95cd3abdb35d" Jan 28 17:48:51 crc kubenswrapper[5001]: I0128 17:48:51.403422 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s4ggl"] Jan 28 17:48:51 crc kubenswrapper[5001]: I0128 17:48:51.411341 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s4ggl"] Jan 28 17:48:51 crc kubenswrapper[5001]: I0128 17:48:51.439076 5001 scope.go:117] "RemoveContainer" containerID="e933094b90ad95f78f0e780427eb2d1d674fd425e456d0c30bcd72e644a5fba9" Jan 28 17:48:51 crc kubenswrapper[5001]: I0128 17:48:51.465011 5001 scope.go:117] "RemoveContainer" containerID="a72b24ca90976f23a5a05207454869c06c747761da635bb8064ea652fd36b230" Jan 28 17:48:51 crc kubenswrapper[5001]: E0128 17:48:51.465561 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a72b24ca90976f23a5a05207454869c06c747761da635bb8064ea652fd36b230\": container with ID starting with a72b24ca90976f23a5a05207454869c06c747761da635bb8064ea652fd36b230 not found: ID does not exist" containerID="a72b24ca90976f23a5a05207454869c06c747761da635bb8064ea652fd36b230" Jan 28 17:48:51 crc kubenswrapper[5001]: I0128 17:48:51.465615 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a72b24ca90976f23a5a05207454869c06c747761da635bb8064ea652fd36b230"} err="failed to get container status \"a72b24ca90976f23a5a05207454869c06c747761da635bb8064ea652fd36b230\": rpc error: code = NotFound desc = could not find container \"a72b24ca90976f23a5a05207454869c06c747761da635bb8064ea652fd36b230\": container with ID starting with a72b24ca90976f23a5a05207454869c06c747761da635bb8064ea652fd36b230 not found: ID does not exist" Jan 28 17:48:51 crc kubenswrapper[5001]: I0128 17:48:51.465648 5001 scope.go:117] "RemoveContainer" containerID="a29e607ba8d1ad2430a1c7412bb2805b3ce527b42b7dc2135d7c95cd3abdb35d" Jan 28 17:48:51 crc kubenswrapper[5001]: E0128 17:48:51.466206 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a29e607ba8d1ad2430a1c7412bb2805b3ce527b42b7dc2135d7c95cd3abdb35d\": container with ID starting with a29e607ba8d1ad2430a1c7412bb2805b3ce527b42b7dc2135d7c95cd3abdb35d not found: ID does not exist" containerID="a29e607ba8d1ad2430a1c7412bb2805b3ce527b42b7dc2135d7c95cd3abdb35d" Jan 28 17:48:51 crc kubenswrapper[5001]: I0128 17:48:51.466233 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a29e607ba8d1ad2430a1c7412bb2805b3ce527b42b7dc2135d7c95cd3abdb35d"} err="failed to get container status \"a29e607ba8d1ad2430a1c7412bb2805b3ce527b42b7dc2135d7c95cd3abdb35d\": rpc error: code = NotFound desc = could not find container \"a29e607ba8d1ad2430a1c7412bb2805b3ce527b42b7dc2135d7c95cd3abdb35d\": container with ID starting with a29e607ba8d1ad2430a1c7412bb2805b3ce527b42b7dc2135d7c95cd3abdb35d not found: ID does not exist" Jan 28 17:48:51 crc kubenswrapper[5001]: I0128 17:48:51.466247 5001 scope.go:117] "RemoveContainer" 
containerID="e933094b90ad95f78f0e780427eb2d1d674fd425e456d0c30bcd72e644a5fba9" Jan 28 17:48:51 crc kubenswrapper[5001]: E0128 17:48:51.466574 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e933094b90ad95f78f0e780427eb2d1d674fd425e456d0c30bcd72e644a5fba9\": container with ID starting with e933094b90ad95f78f0e780427eb2d1d674fd425e456d0c30bcd72e644a5fba9 not found: ID does not exist" containerID="e933094b90ad95f78f0e780427eb2d1d674fd425e456d0c30bcd72e644a5fba9" Jan 28 17:48:51 crc kubenswrapper[5001]: I0128 17:48:51.466596 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e933094b90ad95f78f0e780427eb2d1d674fd425e456d0c30bcd72e644a5fba9"} err="failed to get container status \"e933094b90ad95f78f0e780427eb2d1d674fd425e456d0c30bcd72e644a5fba9\": rpc error: code = NotFound desc = could not find container \"e933094b90ad95f78f0e780427eb2d1d674fd425e456d0c30bcd72e644a5fba9\": container with ID starting with e933094b90ad95f78f0e780427eb2d1d674fd425e456d0c30bcd72e644a5fba9 not found: ID does not exist" Jan 28 17:48:52 crc kubenswrapper[5001]: I0128 17:48:52.615789 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d515959-8bb6-48df-a4b1-01160e237e25" path="/var/lib/kubelet/pods/2d515959-8bb6-48df-a4b1-01160e237e25/volumes" Jan 28 17:48:57 crc kubenswrapper[5001]: I0128 17:48:57.594488 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:48:57 crc kubenswrapper[5001]: E0128 17:48:57.595134 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:48:59 crc kubenswrapper[5001]: I0128 17:48:59.977656 5001 scope.go:117] "RemoveContainer" containerID="cfd7385966844cde05395abce577ee859280fbc860cece7f0e975366c1fe3273" Jan 28 17:49:00 crc kubenswrapper[5001]: I0128 17:49:00.010526 5001 scope.go:117] "RemoveContainer" containerID="4d18221118aec7722936677c613feef493599337945a733fa02cf935feb62952" Jan 28 17:49:00 crc kubenswrapper[5001]: I0128 17:49:00.074211 5001 scope.go:117] "RemoveContainer" containerID="1cbdae035b35c91eb80a0768161a9fce9beb3df3d1d52410a3e8bd8dbb94b566" Jan 28 17:49:00 crc kubenswrapper[5001]: I0128 17:49:00.091512 5001 scope.go:117] "RemoveContainer" containerID="e27297bdb546fbbcfdf66dd7f6a4608570c116d79fa831e27a155b5f3e9b77bd" Jan 28 17:49:00 crc kubenswrapper[5001]: I0128 17:49:00.125387 5001 scope.go:117] "RemoveContainer" containerID="77db723e95a5025abcdffb0e2dcd81478fde4906f3fd6c48fcd24a0bfc58f003" Jan 28 17:49:00 crc kubenswrapper[5001]: I0128 17:49:00.175268 5001 scope.go:117] "RemoveContainer" containerID="d03933df35afaa9bcc74e4255c2123d197033695fe6d97b42837411ee3e8acbf" Jan 28 17:49:00 crc kubenswrapper[5001]: I0128 17:49:00.202922 5001 scope.go:117] "RemoveContainer" containerID="c4277f1a5701d2934afceafcc83652d5616fa398f2caa7ec70e5eaec890cbd69" Jan 28 17:49:00 crc kubenswrapper[5001]: I0128 17:49:00.222606 5001 scope.go:117] "RemoveContainer" containerID="991da8b045406b2b7509928bd3ef2787781c60fc39634293a913840f32d8aba3" Jan 28 17:49:00 crc kubenswrapper[5001]: I0128 17:49:00.253944 
5001 scope.go:117] "RemoveContainer" containerID="4eb5b5411e5f8966a65f821d3a17bd93860a7ec243228cb0dc5e93952ee008d0" Jan 28 17:49:00 crc kubenswrapper[5001]: I0128 17:49:00.273641 5001 scope.go:117] "RemoveContainer" containerID="b0c6ad716ebec2128d1638c90aaf9082b6b61f49db17f75126c559ce249891ef" Jan 28 17:49:00 crc kubenswrapper[5001]: I0128 17:49:00.289672 5001 scope.go:117] "RemoveContainer" containerID="34a4e140e1f55be22484c8c22761693863cf40b1c5716d185f368d2de31c57dc" Jan 28 17:49:03 crc kubenswrapper[5001]: I0128 17:49:03.594443 5001 scope.go:117] "RemoveContainer" containerID="5157d237bfeac5eac7454d251ab1119f3fe153811f2fb78ebb19d886f894e69e" Jan 28 17:49:04 crc kubenswrapper[5001]: I0128 17:49:04.478086 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" event={"ID":"44595eef-a540-442d-8c7a-5f8bd2f2488c","Type":"ContainerStarted","Data":"5ed885fe3e96df86971ee2a10aed14fb16653659823b138edc54f32c70650c93"} Jan 28 17:49:08 crc kubenswrapper[5001]: I0128 17:49:08.510936 5001 generic.go:334] "Generic (PLEG): container finished" podID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerID="5ed885fe3e96df86971ee2a10aed14fb16653659823b138edc54f32c70650c93" exitCode=2 Jan 28 17:49:08 crc kubenswrapper[5001]: I0128 17:49:08.511013 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" event={"ID":"44595eef-a540-442d-8c7a-5f8bd2f2488c","Type":"ContainerDied","Data":"5ed885fe3e96df86971ee2a10aed14fb16653659823b138edc54f32c70650c93"} Jan 28 17:49:08 crc kubenswrapper[5001]: I0128 17:49:08.511344 5001 scope.go:117] "RemoveContainer" containerID="5157d237bfeac5eac7454d251ab1119f3fe153811f2fb78ebb19d886f894e69e" Jan 28 17:49:08 crc kubenswrapper[5001]: I0128 17:49:08.512584 5001 scope.go:117] "RemoveContainer" containerID="5ed885fe3e96df86971ee2a10aed14fb16653659823b138edc54f32c70650c93" Jan 28 17:49:08 crc kubenswrapper[5001]: E0128 17:49:08.513123 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:49:12 crc kubenswrapper[5001]: I0128 17:49:12.595038 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:49:13 crc kubenswrapper[5001]: I0128 17:49:13.579442 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerStarted","Data":"a79c07227a7c2c6a248b779d6d6d0731af5b42260a236321116a9fb26f5132cb"} Jan 28 17:49:19 crc kubenswrapper[5001]: I0128 17:49:19.594356 5001 scope.go:117] "RemoveContainer" containerID="5ed885fe3e96df86971ee2a10aed14fb16653659823b138edc54f32c70650c93" Jan 28 17:49:19 crc kubenswrapper[5001]: E0128 17:49:19.595174 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:49:33 crc 
kubenswrapper[5001]: I0128 17:49:33.593933 5001 scope.go:117] "RemoveContainer" containerID="5ed885fe3e96df86971ee2a10aed14fb16653659823b138edc54f32c70650c93" Jan 28 17:49:33 crc kubenswrapper[5001]: E0128 17:49:33.594608 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:49:45 crc kubenswrapper[5001]: I0128 17:49:45.593889 5001 scope.go:117] "RemoveContainer" containerID="5ed885fe3e96df86971ee2a10aed14fb16653659823b138edc54f32c70650c93" Jan 28 17:49:45 crc kubenswrapper[5001]: E0128 17:49:45.594702 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:50:00 crc kubenswrapper[5001]: I0128 17:50:00.470367 5001 scope.go:117] "RemoveContainer" containerID="6e3b8d29fe13432be73e148d9b8fe27603f143e4dd486be7e5d7fe5a3baa25c6" Jan 28 17:50:00 crc kubenswrapper[5001]: I0128 17:50:00.530568 5001 scope.go:117] "RemoveContainer" containerID="e7e53d0054f79d61859e34f737ea5129036a6aad0096643481ddfe00c79500e1" Jan 28 17:50:00 crc kubenswrapper[5001]: I0128 17:50:00.594604 5001 scope.go:117] "RemoveContainer" containerID="5ed885fe3e96df86971ee2a10aed14fb16653659823b138edc54f32c70650c93" Jan 28 17:50:01 crc kubenswrapper[5001]: I0128 17:50:01.004248 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" event={"ID":"44595eef-a540-442d-8c7a-5f8bd2f2488c","Type":"ContainerStarted","Data":"6dbfc59c3a6176da28eaad7cf74ffd9ff49d2c71563651f97c42d8f4baafe616"} Jan 28 17:50:06 crc kubenswrapper[5001]: I0128 17:50:06.053894 5001 generic.go:334] "Generic (PLEG): container finished" podID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerID="6dbfc59c3a6176da28eaad7cf74ffd9ff49d2c71563651f97c42d8f4baafe616" exitCode=2 Jan 28 17:50:06 crc kubenswrapper[5001]: I0128 17:50:06.053966 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" event={"ID":"44595eef-a540-442d-8c7a-5f8bd2f2488c","Type":"ContainerDied","Data":"6dbfc59c3a6176da28eaad7cf74ffd9ff49d2c71563651f97c42d8f4baafe616"} Jan 28 17:50:06 crc kubenswrapper[5001]: I0128 17:50:06.054564 5001 scope.go:117] "RemoveContainer" containerID="5ed885fe3e96df86971ee2a10aed14fb16653659823b138edc54f32c70650c93" Jan 28 17:50:06 crc kubenswrapper[5001]: I0128 17:50:06.055206 5001 scope.go:117] "RemoveContainer" containerID="6dbfc59c3a6176da28eaad7cf74ffd9ff49d2c71563651f97c42d8f4baafe616" Jan 28 17:50:06 crc kubenswrapper[5001]: E0128 17:50:06.055445 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:50:20 crc 
kubenswrapper[5001]: I0128 17:50:20.594793 5001 scope.go:117] "RemoveContainer" containerID="6dbfc59c3a6176da28eaad7cf74ffd9ff49d2c71563651f97c42d8f4baafe616" Jan 28 17:50:20 crc kubenswrapper[5001]: E0128 17:50:20.595925 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:50:33 crc kubenswrapper[5001]: I0128 17:50:33.595156 5001 scope.go:117] "RemoveContainer" containerID="6dbfc59c3a6176da28eaad7cf74ffd9ff49d2c71563651f97c42d8f4baafe616" Jan 28 17:50:33 crc kubenswrapper[5001]: E0128 17:50:33.596025 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:50:46 crc kubenswrapper[5001]: I0128 17:50:46.594234 5001 scope.go:117] "RemoveContainer" containerID="6dbfc59c3a6176da28eaad7cf74ffd9ff49d2c71563651f97c42d8f4baafe616" Jan 28 17:50:46 crc kubenswrapper[5001]: E0128 17:50:46.595157 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:50:57 crc kubenswrapper[5001]: I0128 17:50:57.593962 5001 scope.go:117] "RemoveContainer" containerID="6dbfc59c3a6176da28eaad7cf74ffd9ff49d2c71563651f97c42d8f4baafe616" Jan 28 17:50:57 crc kubenswrapper[5001]: E0128 17:50:57.594791 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:51:00 crc kubenswrapper[5001]: I0128 17:51:00.622613 5001 scope.go:117] "RemoveContainer" containerID="458d8261145e83b564a17a3cc33c1cb7b78a947e358ba22ded142be1e9fb319e" Jan 28 17:51:00 crc kubenswrapper[5001]: I0128 17:51:00.645623 5001 scope.go:117] "RemoveContainer" containerID="669d05e2117aea299f4eb92c64c1f9cafa29ac86509cc08c36358b2492568eba" Jan 28 17:51:00 crc kubenswrapper[5001]: I0128 17:51:00.697405 5001 scope.go:117] "RemoveContainer" containerID="a7dbc54393a470edf356a8f42c5679939bff8a013029e11c49a7d179d30c8ae9" Jan 28 17:51:00 crc kubenswrapper[5001]: I0128 17:51:00.732352 5001 scope.go:117] "RemoveContainer" containerID="0add5320ee9f94e72e66057cbdd5c5582c1a55e64dcd04c539201497277cdbcc" Jan 28 17:51:00 crc kubenswrapper[5001]: I0128 17:51:00.778826 5001 scope.go:117] "RemoveContainer" containerID="63b1a0ff0c5d0e505261573740bb2d672dfbb64dc3a2824ffbde86462238ad5a" Jan 28 17:51:00 crc kubenswrapper[5001]: I0128 17:51:00.825861 5001 scope.go:117] "RemoveContainer" 
containerID="546186fa0e66d896b475806941c2fca4ee9931096ff4285d377bcf06849f2a5f" Jan 28 17:51:00 crc kubenswrapper[5001]: I0128 17:51:00.850917 5001 scope.go:117] "RemoveContainer" containerID="b0fc950e6c91feb62b813953d3f483a2d8fa53d0748a257d53153e06dc05a1c1" Jan 28 17:51:00 crc kubenswrapper[5001]: I0128 17:51:00.866987 5001 scope.go:117] "RemoveContainer" containerID="c6abaaffd4b1a9b2261026291140d182d285151ef7a1a17f196cd8438e9013da" Jan 28 17:51:00 crc kubenswrapper[5001]: I0128 17:51:00.886093 5001 scope.go:117] "RemoveContainer" containerID="45fe20b17251dd7a5a54d6e5900d4f03a3b73cf76ff3e67ee16d0ba45d046d1c" Jan 28 17:51:09 crc kubenswrapper[5001]: I0128 17:51:09.594457 5001 scope.go:117] "RemoveContainer" containerID="6dbfc59c3a6176da28eaad7cf74ffd9ff49d2c71563651f97c42d8f4baafe616" Jan 28 17:51:09 crc kubenswrapper[5001]: E0128 17:51:09.595520 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:51:24 crc kubenswrapper[5001]: I0128 17:51:24.600304 5001 scope.go:117] "RemoveContainer" containerID="6dbfc59c3a6176da28eaad7cf74ffd9ff49d2c71563651f97c42d8f4baafe616" Jan 28 17:51:24 crc kubenswrapper[5001]: E0128 17:51:24.601362 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:51:34 crc kubenswrapper[5001]: I0128 17:51:34.833983 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:51:34 crc kubenswrapper[5001]: I0128 17:51:34.834506 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:51:35 crc kubenswrapper[5001]: I0128 17:51:35.594124 5001 scope.go:117] "RemoveContainer" containerID="6dbfc59c3a6176da28eaad7cf74ffd9ff49d2c71563651f97c42d8f4baafe616" Jan 28 17:51:35 crc kubenswrapper[5001]: I0128 17:51:35.831225 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" event={"ID":"44595eef-a540-442d-8c7a-5f8bd2f2488c","Type":"ContainerStarted","Data":"190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15"} Jan 28 17:51:40 crc kubenswrapper[5001]: I0128 17:51:40.869245 5001 generic.go:334] "Generic (PLEG): container finished" podID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerID="190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15" exitCode=2 Jan 28 17:51:40 crc kubenswrapper[5001]: I0128 17:51:40.869336 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" event={"ID":"44595eef-a540-442d-8c7a-5f8bd2f2488c","Type":"ContainerDied","Data":"190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15"} Jan 28 17:51:40 crc kubenswrapper[5001]: I0128 17:51:40.870250 5001 scope.go:117] "RemoveContainer" containerID="6dbfc59c3a6176da28eaad7cf74ffd9ff49d2c71563651f97c42d8f4baafe616" Jan 28 17:51:40 crc kubenswrapper[5001]: I0128 17:51:40.870885 5001 scope.go:117] "RemoveContainer" containerID="190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15" Jan 28 17:51:40 crc kubenswrapper[5001]: E0128 17:51:40.871146 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:51:55 crc kubenswrapper[5001]: I0128 17:51:55.594685 5001 scope.go:117] "RemoveContainer" containerID="190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15" Jan 28 17:51:55 crc kubenswrapper[5001]: E0128 17:51:55.595419 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:52:01 crc kubenswrapper[5001]: I0128 17:52:01.024107 5001 scope.go:117] "RemoveContainer" containerID="9ad494391725c1b8be0c36afe61c3950489ddbbe3e0a2fc96dc0c5aaa639a59c" Jan 28 17:52:01 crc kubenswrapper[5001]: I0128 17:52:01.080251 5001 scope.go:117] "RemoveContainer" containerID="05ba5d980bb66d25cb7cb635480f535432f5bcd805a5e4979c7950fe4724c439" Jan 28 17:52:01 crc kubenswrapper[5001]: I0128 17:52:01.116030 5001 scope.go:117] "RemoveContainer" containerID="0b2f70b84a1ee0d6e70dc0fbf9538d6596d69a1c11970e88b050d09ea9f0f9c8" Jan 28 17:52:01 crc kubenswrapper[5001]: I0128 17:52:01.164182 5001 scope.go:117] "RemoveContainer" containerID="dcf89bbdac48eae726b0ce1cbdd4f84d75f6641ba7cd5a53c61dfac3f67bd39b" Jan 28 17:52:01 crc kubenswrapper[5001]: I0128 17:52:01.215898 5001 scope.go:117] "RemoveContainer" containerID="a32f587123b9d75fa0381e0e0962056e83dd1a20f7ad2d3aca3e09231deaf98a" Jan 28 17:52:04 crc kubenswrapper[5001]: I0128 17:52:04.834249 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:52:04 crc kubenswrapper[5001]: I0128 17:52:04.834517 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:52:08 crc kubenswrapper[5001]: I0128 17:52:08.594098 5001 scope.go:117] "RemoveContainer" containerID="190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15" Jan 28 17:52:08 crc kubenswrapper[5001]: E0128 17:52:08.594910 5001 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:52:21 crc kubenswrapper[5001]: I0128 17:52:21.594729 5001 scope.go:117] "RemoveContainer" containerID="190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15" Jan 28 17:52:21 crc kubenswrapper[5001]: E0128 17:52:21.595587 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:52:33 crc kubenswrapper[5001]: I0128 17:52:33.594612 5001 scope.go:117] "RemoveContainer" containerID="190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15" Jan 28 17:52:33 crc kubenswrapper[5001]: E0128 17:52:33.595411 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:52:34 crc kubenswrapper[5001]: I0128 17:52:34.834373 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:52:34 crc kubenswrapper[5001]: I0128 17:52:34.834642 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:52:34 crc kubenswrapper[5001]: I0128 17:52:34.834684 5001 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:52:34 crc kubenswrapper[5001]: I0128 17:52:34.835348 5001 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a79c07227a7c2c6a248b779d6d6d0731af5b42260a236321116a9fb26f5132cb"} pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:52:34 crc kubenswrapper[5001]: I0128 17:52:34.835401 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" containerID="cri-o://a79c07227a7c2c6a248b779d6d6d0731af5b42260a236321116a9fb26f5132cb" gracePeriod=600 Jan 28 17:52:35 crc kubenswrapper[5001]: I0128 17:52:35.276286 5001 generic.go:334] "Generic (PLEG): 
container finished" podID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerID="a79c07227a7c2c6a248b779d6d6d0731af5b42260a236321116a9fb26f5132cb" exitCode=0 Jan 28 17:52:35 crc kubenswrapper[5001]: I0128 17:52:35.276412 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerDied","Data":"a79c07227a7c2c6a248b779d6d6d0731af5b42260a236321116a9fb26f5132cb"} Jan 28 17:52:35 crc kubenswrapper[5001]: I0128 17:52:35.276835 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerStarted","Data":"e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3"} Jan 28 17:52:35 crc kubenswrapper[5001]: I0128 17:52:35.276938 5001 scope.go:117] "RemoveContainer" containerID="69daac4bb0855e8f17bfd6fe28e12126a74435117ddd21b376ff6cb16aa762d8" Jan 28 17:52:48 crc kubenswrapper[5001]: I0128 17:52:48.594276 5001 scope.go:117] "RemoveContainer" containerID="190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15" Jan 28 17:52:48 crc kubenswrapper[5001]: E0128 17:52:48.595165 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:53:01 crc kubenswrapper[5001]: I0128 17:53:01.317942 5001 scope.go:117] "RemoveContainer" containerID="159e7a5c74518fee9b13070e1f65fa2dde0ebd3bd69cf6328280798fc68a188e" Jan 28 17:53:01 crc kubenswrapper[5001]: I0128 17:53:01.346298 5001 scope.go:117] "RemoveContainer" containerID="a4b094028e9fac812a5a896365822769d8ee142a4a1843d07567743b8ac08660" Jan 28 17:53:01 crc kubenswrapper[5001]: I0128 17:53:01.375335 5001 scope.go:117] "RemoveContainer" containerID="1c1c93db31f0de347cbce32e3e40024d64b4c0ab180d57b0198370768bebe8d8" Jan 28 17:53:02 crc kubenswrapper[5001]: I0128 17:53:02.594741 5001 scope.go:117] "RemoveContainer" containerID="190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15" Jan 28 17:53:02 crc kubenswrapper[5001]: E0128 17:53:02.596340 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:53:08 crc kubenswrapper[5001]: I0128 17:53:08.293102 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-848f6f5b7c-k72xs_77f41721-97c0-4a00-83ca-c5fb40170cfa/keystone-api/0.log" Jan 28 17:53:10 crc kubenswrapper[5001]: I0128 17:53:10.923028 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_cd92fc04-2e00-4c0a-b704-204aeeb70ff1/memcached/0.log" Jan 28 17:53:11 crc kubenswrapper[5001]: I0128 17:53:11.438760 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-84be-account-create-update-rzrth_849dd9e3-fa3a-413f-909f-356d87e51427/mariadb-account-create-update/0.log" Jan 28 17:53:11 crc 
kubenswrapper[5001]: I0128 17:53:11.916452 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-db-create-pcn2s_b747464b-8e55-4056-a282-11d656d849dc/mariadb-database-create/0.log" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.332015 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zr97h"] Jan 28 17:53:12 crc kubenswrapper[5001]: E0128 17:53:12.332586 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="827618f5-fa57-4b0e-aa43-02a94998a709" containerName="extract-content" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.332605 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="827618f5-fa57-4b0e-aa43-02a94998a709" containerName="extract-content" Jan 28 17:53:12 crc kubenswrapper[5001]: E0128 17:53:12.332618 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecceb50b-01e9-48bf-91c1-73a33d9869f7" containerName="registry-server" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.332625 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecceb50b-01e9-48bf-91c1-73a33d9869f7" containerName="registry-server" Jan 28 17:53:12 crc kubenswrapper[5001]: E0128 17:53:12.332649 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="827618f5-fa57-4b0e-aa43-02a94998a709" containerName="extract-utilities" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.332657 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="827618f5-fa57-4b0e-aa43-02a94998a709" containerName="extract-utilities" Jan 28 17:53:12 crc kubenswrapper[5001]: E0128 17:53:12.332664 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d515959-8bb6-48df-a4b1-01160e237e25" containerName="registry-server" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.332671 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d515959-8bb6-48df-a4b1-01160e237e25" containerName="registry-server" Jan 28 17:53:12 crc kubenswrapper[5001]: E0128 17:53:12.332681 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d515959-8bb6-48df-a4b1-01160e237e25" containerName="extract-utilities" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.332687 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d515959-8bb6-48df-a4b1-01160e237e25" containerName="extract-utilities" Jan 28 17:53:12 crc kubenswrapper[5001]: E0128 17:53:12.332705 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecceb50b-01e9-48bf-91c1-73a33d9869f7" containerName="extract-utilities" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.332712 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecceb50b-01e9-48bf-91c1-73a33d9869f7" containerName="extract-utilities" Jan 28 17:53:12 crc kubenswrapper[5001]: E0128 17:53:12.332725 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="827618f5-fa57-4b0e-aa43-02a94998a709" containerName="registry-server" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.332732 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="827618f5-fa57-4b0e-aa43-02a94998a709" containerName="registry-server" Jan 28 17:53:12 crc kubenswrapper[5001]: E0128 17:53:12.332746 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d515959-8bb6-48df-a4b1-01160e237e25" containerName="extract-content" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.332753 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d515959-8bb6-48df-a4b1-01160e237e25" 
containerName="extract-content" Jan 28 17:53:12 crc kubenswrapper[5001]: E0128 17:53:12.332764 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecceb50b-01e9-48bf-91c1-73a33d9869f7" containerName="extract-content" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.332773 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecceb50b-01e9-48bf-91c1-73a33d9869f7" containerName="extract-content" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.332938 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecceb50b-01e9-48bf-91c1-73a33d9869f7" containerName="registry-server" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.332957 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d515959-8bb6-48df-a4b1-01160e237e25" containerName="registry-server" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.333006 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="827618f5-fa57-4b0e-aa43-02a94998a709" containerName="registry-server" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.334333 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zr97h" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.347007 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zr97h"] Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.384591 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-6ca3-account-create-update-q8v42_874f309a-0607-4381-99fa-eb25d0e34f02/mariadb-account-create-update/0.log" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.426783 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd40026a-6940-42f1-87c8-36e7d3ba8258-utilities\") pod \"certified-operators-zr97h\" (UID: \"fd40026a-6940-42f1-87c8-36e7d3ba8258\") " pod="openshift-marketplace/certified-operators-zr97h" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.426829 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd40026a-6940-42f1-87c8-36e7d3ba8258-catalog-content\") pod \"certified-operators-zr97h\" (UID: \"fd40026a-6940-42f1-87c8-36e7d3ba8258\") " pod="openshift-marketplace/certified-operators-zr97h" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.427105 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pctz\" (UniqueName: \"kubernetes.io/projected/fd40026a-6940-42f1-87c8-36e7d3ba8258-kube-api-access-8pctz\") pod \"certified-operators-zr97h\" (UID: \"fd40026a-6940-42f1-87c8-36e7d3ba8258\") " pod="openshift-marketplace/certified-operators-zr97h" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.528690 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd40026a-6940-42f1-87c8-36e7d3ba8258-utilities\") pod \"certified-operators-zr97h\" (UID: \"fd40026a-6940-42f1-87c8-36e7d3ba8258\") " pod="openshift-marketplace/certified-operators-zr97h" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.528747 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fd40026a-6940-42f1-87c8-36e7d3ba8258-catalog-content\") pod \"certified-operators-zr97h\" (UID: \"fd40026a-6940-42f1-87c8-36e7d3ba8258\") " pod="openshift-marketplace/certified-operators-zr97h" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.528873 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pctz\" (UniqueName: \"kubernetes.io/projected/fd40026a-6940-42f1-87c8-36e7d3ba8258-kube-api-access-8pctz\") pod \"certified-operators-zr97h\" (UID: \"fd40026a-6940-42f1-87c8-36e7d3ba8258\") " pod="openshift-marketplace/certified-operators-zr97h" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.529238 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd40026a-6940-42f1-87c8-36e7d3ba8258-utilities\") pod \"certified-operators-zr97h\" (UID: \"fd40026a-6940-42f1-87c8-36e7d3ba8258\") " pod="openshift-marketplace/certified-operators-zr97h" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.529279 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd40026a-6940-42f1-87c8-36e7d3ba8258-catalog-content\") pod \"certified-operators-zr97h\" (UID: \"fd40026a-6940-42f1-87c8-36e7d3ba8258\") " pod="openshift-marketplace/certified-operators-zr97h" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.553067 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pctz\" (UniqueName: \"kubernetes.io/projected/fd40026a-6940-42f1-87c8-36e7d3ba8258-kube-api-access-8pctz\") pod \"certified-operators-zr97h\" (UID: \"fd40026a-6940-42f1-87c8-36e7d3ba8258\") " pod="openshift-marketplace/certified-operators-zr97h" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.651692 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zr97h" Jan 28 17:53:12 crc kubenswrapper[5001]: I0128 17:53:12.893458 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-db-create-tb26s_7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11/mariadb-database-create/0.log" Jan 28 17:53:13 crc kubenswrapper[5001]: I0128 17:53:13.130838 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zr97h"] Jan 28 17:53:13 crc kubenswrapper[5001]: W0128 17:53:13.134754 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd40026a_6940_42f1_87c8_36e7d3ba8258.slice/crio-20358711801d07fb84102918adcd69aa30ce47edf7634fe06972620a88182c14 WatchSource:0}: Error finding container 20358711801d07fb84102918adcd69aa30ce47edf7634fe06972620a88182c14: Status 404 returned error can't find the container with id 20358711801d07fb84102918adcd69aa30ce47edf7634fe06972620a88182c14 Jan 28 17:53:13 crc kubenswrapper[5001]: I0128 17:53:13.397401 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-cda9-account-create-update-cwncp_b9c76350-9eb6-495e-874f-a186c207bea6/mariadb-account-create-update/0.log" Jan 28 17:53:13 crc kubenswrapper[5001]: I0128 17:53:13.576836 5001 generic.go:334] "Generic (PLEG): container finished" podID="fd40026a-6940-42f1-87c8-36e7d3ba8258" containerID="4fef72c09d932f9a09d6d531c1a5ccb0bfd6a2ecdae06a5a50fafe46761a63be" exitCode=0 Jan 28 17:53:13 crc kubenswrapper[5001]: I0128 17:53:13.576906 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr97h" event={"ID":"fd40026a-6940-42f1-87c8-36e7d3ba8258","Type":"ContainerDied","Data":"4fef72c09d932f9a09d6d531c1a5ccb0bfd6a2ecdae06a5a50fafe46761a63be"} Jan 28 17:53:13 crc kubenswrapper[5001]: I0128 17:53:13.577083 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr97h" event={"ID":"fd40026a-6940-42f1-87c8-36e7d3ba8258","Type":"ContainerStarted","Data":"20358711801d07fb84102918adcd69aa30ce47edf7634fe06972620a88182c14"} Jan 28 17:53:13 crc kubenswrapper[5001]: I0128 17:53:13.578957 5001 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 17:53:13 crc kubenswrapper[5001]: I0128 17:53:13.594852 5001 scope.go:117] "RemoveContainer" containerID="190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15" Jan 28 17:53:13 crc kubenswrapper[5001]: E0128 17:53:13.595162 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:53:13 crc kubenswrapper[5001]: I0128 17:53:13.876722 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-db-create-5gvgn_d09009c8-2bc4-4a46-8243-9f226a5aa244/mariadb-database-create/0.log" Jan 28 17:53:14 crc kubenswrapper[5001]: I0128 17:53:14.389988 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_8bb877d2-6693-4274-a705-6551fe435fb2/nova-kuttl-api-log/0.log" Jan 28 17:53:14 crc kubenswrapper[5001]: I0128 17:53:14.586527 5001 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr97h" event={"ID":"fd40026a-6940-42f1-87c8-36e7d3ba8258","Type":"ContainerStarted","Data":"ea948c940ce61b3bb117c98c7b24faef5ee96afc34f660ee4ca92edad0f2a107"} Jan 28 17:53:14 crc kubenswrapper[5001]: I0128 17:53:14.828629 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-cell-mapping-nsw78_c91ec979-f8eb-45b2-af41-a2040b954d89/nova-manage/0.log" Jan 28 17:53:15 crc kubenswrapper[5001]: I0128 17:53:15.278560 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-0_99fae513-1f96-42f3-9e69-e55b82c047dc/nova-kuttl-cell0-conductor-conductor/0.log" Jan 28 17:53:15 crc kubenswrapper[5001]: I0128 17:53:15.606934 5001 generic.go:334] "Generic (PLEG): container finished" podID="fd40026a-6940-42f1-87c8-36e7d3ba8258" containerID="ea948c940ce61b3bb117c98c7b24faef5ee96afc34f660ee4ca92edad0f2a107" exitCode=0 Jan 28 17:53:15 crc kubenswrapper[5001]: I0128 17:53:15.607008 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr97h" event={"ID":"fd40026a-6940-42f1-87c8-36e7d3ba8258","Type":"ContainerDied","Data":"ea948c940ce61b3bb117c98c7b24faef5ee96afc34f660ee4ca92edad0f2a107"} Jan 28 17:53:15 crc kubenswrapper[5001]: I0128 17:53:15.730061 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-db-sync-dvstz_7f16db31-239d-4a00-8c6d-e50c10fbf407/nova-kuttl-cell0-conductor-db-sync/0.log" Jan 28 17:53:16 crc kubenswrapper[5001]: I0128 17:53:16.176249 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-cell-delete-tvxst_44595eef-a540-442d-8c7a-5f8bd2f2488c/nova-manage/5.log" Jan 28 17:53:16 crc kubenswrapper[5001]: I0128 17:53:16.642116 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-cell-mapping-7zhhm_19d576b9-5be5-4988-a627-4d6b96e55a64/nova-manage/0.log" Jan 28 17:53:17 crc kubenswrapper[5001]: I0128 17:53:17.172891 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-0_95e53dae-9c1d-442b-b282-a377730ba93a/nova-kuttl-cell1-conductor-conductor/0.log" Jan 28 17:53:17 crc kubenswrapper[5001]: I0128 17:53:17.625265 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr97h" event={"ID":"fd40026a-6940-42f1-87c8-36e7d3ba8258","Type":"ContainerStarted","Data":"fa7c8ab94acbcd38eb00140be0bb7c14460d4856c429a0c3721feaab8fc3cd06"} Jan 28 17:53:17 crc kubenswrapper[5001]: I0128 17:53:17.635964 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-db-sync-p5mk9_b6c2af0c-5c66-4f40-b9b4-10b4efec408a/nova-kuttl-cell1-conductor-db-sync/0.log" Jan 28 17:53:17 crc kubenswrapper[5001]: I0128 17:53:17.647642 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zr97h" podStartSLOduration=2.766481116 podStartE2EDuration="5.647618887s" podCreationTimestamp="2026-01-28 17:53:12 +0000 UTC" firstStartedPulling="2026-01-28 17:53:13.57862741 +0000 UTC m=+2239.746415640" lastFinishedPulling="2026-01-28 17:53:16.459765181 +0000 UTC m=+2242.627553411" observedRunningTime="2026-01-28 17:53:17.641600983 +0000 UTC m=+2243.809389233" watchObservedRunningTime="2026-01-28 17:53:17.647618887 +0000 UTC m=+2243.815407117" Jan 28 17:53:18 crc 
kubenswrapper[5001]: I0128 17:53:18.057773 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-novncproxy-0_1b94ad6a-ac77-455c-a73f-a9a047f5d714/nova-kuttl-cell1-novncproxy-novncproxy/0.log" Jan 28 17:53:18 crc kubenswrapper[5001]: I0128 17:53:18.521862 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_cdc03496-6661-435e-ae7f-c20ba5e7b381/nova-kuttl-metadata-log/0.log" Jan 28 17:53:18 crc kubenswrapper[5001]: I0128 17:53:18.961355 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-scheduler-0_7e25214b-bf18-4e49-82d6-53519f9b2ccd/nova-kuttl-scheduler-scheduler/0.log" Jan 28 17:53:19 crc kubenswrapper[5001]: I0128 17:53:19.421497 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_27cec822-2561-4682-bb1c-3fe4fd0805f4/galera/0.log" Jan 28 17:53:19 crc kubenswrapper[5001]: I0128 17:53:19.882065 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_4efb190a-f4c2-4761-9fab-7e26fc702121/galera/0.log" Jan 28 17:53:20 crc kubenswrapper[5001]: I0128 17:53:20.365046 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_535ec83f-ed6c-460b-8369-d710976d266f/openstackclient/0.log" Jan 28 17:53:20 crc kubenswrapper[5001]: I0128 17:53:20.830087 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-b8bc4c9f4-htpjw_85ac1e3b-7a2c-4c90-adcb-34d4efd01f41/placement-log/0.log" Jan 28 17:53:21 crc kubenswrapper[5001]: I0128 17:53:21.279665 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_bdcddc03-22a9-44af-8c79-67fe309358ec/rabbitmq/0.log" Jan 28 17:53:21 crc kubenswrapper[5001]: I0128 17:53:21.793924 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_1bc7adbb-0250-4320-98f4-7a0a69b77724/rabbitmq/0.log" Jan 28 17:53:22 crc kubenswrapper[5001]: I0128 17:53:22.335086 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_420bd810-85d0-4ced-bcd8-3ae62c8c79e4/rabbitmq/0.log" Jan 28 17:53:22 crc kubenswrapper[5001]: I0128 17:53:22.651984 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zr97h" Jan 28 17:53:22 crc kubenswrapper[5001]: I0128 17:53:22.652043 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zr97h" Jan 28 17:53:22 crc kubenswrapper[5001]: I0128 17:53:22.695844 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zr97h" Jan 28 17:53:22 crc kubenswrapper[5001]: I0128 17:53:22.737249 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zr97h" Jan 28 17:53:24 crc kubenswrapper[5001]: I0128 17:53:24.521413 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zr97h"] Jan 28 17:53:24 crc kubenswrapper[5001]: I0128 17:53:24.670479 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zr97h" podUID="fd40026a-6940-42f1-87c8-36e7d3ba8258" containerName="registry-server" 
containerID="cri-o://fa7c8ab94acbcd38eb00140be0bb7c14460d4856c429a0c3721feaab8fc3cd06" gracePeriod=2 Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.178654 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zr97h" Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.286109 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd40026a-6940-42f1-87c8-36e7d3ba8258-utilities\") pod \"fd40026a-6940-42f1-87c8-36e7d3ba8258\" (UID: \"fd40026a-6940-42f1-87c8-36e7d3ba8258\") " Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.286445 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pctz\" (UniqueName: \"kubernetes.io/projected/fd40026a-6940-42f1-87c8-36e7d3ba8258-kube-api-access-8pctz\") pod \"fd40026a-6940-42f1-87c8-36e7d3ba8258\" (UID: \"fd40026a-6940-42f1-87c8-36e7d3ba8258\") " Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.286512 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd40026a-6940-42f1-87c8-36e7d3ba8258-catalog-content\") pod \"fd40026a-6940-42f1-87c8-36e7d3ba8258\" (UID: \"fd40026a-6940-42f1-87c8-36e7d3ba8258\") " Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.287174 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd40026a-6940-42f1-87c8-36e7d3ba8258-utilities" (OuterVolumeSpecName: "utilities") pod "fd40026a-6940-42f1-87c8-36e7d3ba8258" (UID: "fd40026a-6940-42f1-87c8-36e7d3ba8258"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.292934 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd40026a-6940-42f1-87c8-36e7d3ba8258-kube-api-access-8pctz" (OuterVolumeSpecName: "kube-api-access-8pctz") pod "fd40026a-6940-42f1-87c8-36e7d3ba8258" (UID: "fd40026a-6940-42f1-87c8-36e7d3ba8258"). InnerVolumeSpecName "kube-api-access-8pctz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.339991 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd40026a-6940-42f1-87c8-36e7d3ba8258-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd40026a-6940-42f1-87c8-36e7d3ba8258" (UID: "fd40026a-6940-42f1-87c8-36e7d3ba8258"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.388352 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd40026a-6940-42f1-87c8-36e7d3ba8258-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.388399 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd40026a-6940-42f1-87c8-36e7d3ba8258-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.388414 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pctz\" (UniqueName: \"kubernetes.io/projected/fd40026a-6940-42f1-87c8-36e7d3ba8258-kube-api-access-8pctz\") on node \"crc\" DevicePath \"\"" Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.696214 5001 generic.go:334] "Generic (PLEG): container finished" podID="fd40026a-6940-42f1-87c8-36e7d3ba8258" containerID="fa7c8ab94acbcd38eb00140be0bb7c14460d4856c429a0c3721feaab8fc3cd06" exitCode=0 Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.696259 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr97h" event={"ID":"fd40026a-6940-42f1-87c8-36e7d3ba8258","Type":"ContainerDied","Data":"fa7c8ab94acbcd38eb00140be0bb7c14460d4856c429a0c3721feaab8fc3cd06"} Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.696314 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr97h" event={"ID":"fd40026a-6940-42f1-87c8-36e7d3ba8258","Type":"ContainerDied","Data":"20358711801d07fb84102918adcd69aa30ce47edf7634fe06972620a88182c14"} Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.696338 5001 scope.go:117] "RemoveContainer" containerID="fa7c8ab94acbcd38eb00140be0bb7c14460d4856c429a0c3721feaab8fc3cd06" Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.696352 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zr97h" Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.743872 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zr97h"] Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.746252 5001 scope.go:117] "RemoveContainer" containerID="ea948c940ce61b3bb117c98c7b24faef5ee96afc34f660ee4ca92edad0f2a107" Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.753186 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zr97h"] Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.766289 5001 scope.go:117] "RemoveContainer" containerID="4fef72c09d932f9a09d6d531c1a5ccb0bfd6a2ecdae06a5a50fafe46761a63be" Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.790719 5001 scope.go:117] "RemoveContainer" containerID="fa7c8ab94acbcd38eb00140be0bb7c14460d4856c429a0c3721feaab8fc3cd06" Jan 28 17:53:27 crc kubenswrapper[5001]: E0128 17:53:27.791181 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa7c8ab94acbcd38eb00140be0bb7c14460d4856c429a0c3721feaab8fc3cd06\": container with ID starting with fa7c8ab94acbcd38eb00140be0bb7c14460d4856c429a0c3721feaab8fc3cd06 not found: ID does not exist" containerID="fa7c8ab94acbcd38eb00140be0bb7c14460d4856c429a0c3721feaab8fc3cd06" Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.791226 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa7c8ab94acbcd38eb00140be0bb7c14460d4856c429a0c3721feaab8fc3cd06"} err="failed to get container status \"fa7c8ab94acbcd38eb00140be0bb7c14460d4856c429a0c3721feaab8fc3cd06\": rpc error: code = NotFound desc = could not find container \"fa7c8ab94acbcd38eb00140be0bb7c14460d4856c429a0c3721feaab8fc3cd06\": container with ID starting with fa7c8ab94acbcd38eb00140be0bb7c14460d4856c429a0c3721feaab8fc3cd06 not found: ID does not exist" Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.791257 5001 scope.go:117] "RemoveContainer" containerID="ea948c940ce61b3bb117c98c7b24faef5ee96afc34f660ee4ca92edad0f2a107" Jan 28 17:53:27 crc kubenswrapper[5001]: E0128 17:53:27.791605 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea948c940ce61b3bb117c98c7b24faef5ee96afc34f660ee4ca92edad0f2a107\": container with ID starting with ea948c940ce61b3bb117c98c7b24faef5ee96afc34f660ee4ca92edad0f2a107 not found: ID does not exist" containerID="ea948c940ce61b3bb117c98c7b24faef5ee96afc34f660ee4ca92edad0f2a107" Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.791633 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea948c940ce61b3bb117c98c7b24faef5ee96afc34f660ee4ca92edad0f2a107"} err="failed to get container status \"ea948c940ce61b3bb117c98c7b24faef5ee96afc34f660ee4ca92edad0f2a107\": rpc error: code = NotFound desc = could not find container \"ea948c940ce61b3bb117c98c7b24faef5ee96afc34f660ee4ca92edad0f2a107\": container with ID starting with ea948c940ce61b3bb117c98c7b24faef5ee96afc34f660ee4ca92edad0f2a107 not found: ID does not exist" Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.791650 5001 scope.go:117] "RemoveContainer" containerID="4fef72c09d932f9a09d6d531c1a5ccb0bfd6a2ecdae06a5a50fafe46761a63be" Jan 28 17:53:27 crc kubenswrapper[5001]: E0128 17:53:27.791846 5001 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4fef72c09d932f9a09d6d531c1a5ccb0bfd6a2ecdae06a5a50fafe46761a63be\": container with ID starting with 4fef72c09d932f9a09d6d531c1a5ccb0bfd6a2ecdae06a5a50fafe46761a63be not found: ID does not exist" containerID="4fef72c09d932f9a09d6d531c1a5ccb0bfd6a2ecdae06a5a50fafe46761a63be" Jan 28 17:53:27 crc kubenswrapper[5001]: I0128 17:53:27.791865 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fef72c09d932f9a09d6d531c1a5ccb0bfd6a2ecdae06a5a50fafe46761a63be"} err="failed to get container status \"4fef72c09d932f9a09d6d531c1a5ccb0bfd6a2ecdae06a5a50fafe46761a63be\": rpc error: code = NotFound desc = could not find container \"4fef72c09d932f9a09d6d531c1a5ccb0bfd6a2ecdae06a5a50fafe46761a63be\": container with ID starting with 4fef72c09d932f9a09d6d531c1a5ccb0bfd6a2ecdae06a5a50fafe46761a63be not found: ID does not exist" Jan 28 17:53:28 crc kubenswrapper[5001]: I0128 17:53:28.594416 5001 scope.go:117] "RemoveContainer" containerID="190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15" Jan 28 17:53:28 crc kubenswrapper[5001]: E0128 17:53:28.594752 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:53:28 crc kubenswrapper[5001]: I0128 17:53:28.603100 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd40026a-6940-42f1-87c8-36e7d3ba8258" path="/var/lib/kubelet/pods/fd40026a-6940-42f1-87c8-36e7d3ba8258/volumes" Jan 28 17:53:39 crc kubenswrapper[5001]: I0128 17:53:39.593787 5001 scope.go:117] "RemoveContainer" containerID="190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15" Jan 28 17:53:39 crc kubenswrapper[5001]: E0128 17:53:39.594689 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:53:52 crc kubenswrapper[5001]: I0128 17:53:52.192290 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8_93ddfb0d-4440-4560-b5c5-3e252576ef02/extract/0.log" Jan 28 17:53:52 crc kubenswrapper[5001]: I0128 17:53:52.595268 5001 scope.go:117] "RemoveContainer" containerID="190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15" Jan 28 17:53:52 crc kubenswrapper[5001]: E0128 17:53:52.595745 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:53:52 crc kubenswrapper[5001]: I0128 17:53:52.644579 5001 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w_768cda87-43fb-49e7-a591-d7f0216e2683/extract/0.log" Jan 28 17:53:53 crc kubenswrapper[5001]: I0128 17:53:53.049905 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-jvbp8_842efc7d-25a3-4383-9ca0-a3d2e101990a/manager/0.log" Jan 28 17:53:53 crc kubenswrapper[5001]: I0128 17:53:53.480657 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-pvh7q_44247ccc-08d6-4c04-ae14-7595add07217/manager/0.log" Jan 28 17:53:53 crc kubenswrapper[5001]: I0128 17:53:53.862060 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-jncbk_cff5af3e-db62-41be-b49c-8df7ea7a015a/manager/0.log" Jan 28 17:53:54 crc kubenswrapper[5001]: I0128 17:53:54.242683 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-cv7jq_1f3f3b33-d586-448c-a967-fcd03c6fb11d/manager/0.log" Jan 28 17:53:54 crc kubenswrapper[5001]: I0128 17:53:54.617326 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-rpd8k_f3751f38-96d6-42a8-98da-05cfbd294fb5/manager/0.log" Jan 28 17:53:55 crc kubenswrapper[5001]: I0128 17:53:55.025519 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-dt2m9_0311571f-c23c-4554-8763-a3daced65fc8/manager/0.log" Jan 28 17:53:55 crc kubenswrapper[5001]: I0128 17:53:55.515374 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-jm966_42016b42-c753-4265-9902-c2969117ad64/manager/0.log" Jan 28 17:53:55 crc kubenswrapper[5001]: I0128 17:53:55.967857 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-twmcs_77b3a8a4-addb-4f1c-95e5-8ad4b54ebf7d/manager/0.log" Jan 28 17:53:56 crc kubenswrapper[5001]: I0128 17:53:56.476534 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-p6lff_0d72532d-5aac-40f4-b308-4ac21a287e81/manager/0.log" Jan 28 17:53:56 crc kubenswrapper[5001]: I0128 17:53:56.903455 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-qrfs7_2049f024-4549-43c1-b3ea-c42b38ade539/manager/0.log" Jan 28 17:53:57 crc kubenswrapper[5001]: I0128 17:53:57.325384 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-6kwks_660281b0-2db3-4f96-a8c5-69c0ca0a5072/manager/0.log" Jan 28 17:53:57 crc kubenswrapper[5001]: I0128 17:53:57.726541 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-c44sh_730865cc-5b68-4c45-927b-8a5fee90c539/manager/0.log" Jan 28 17:53:58 crc kubenswrapper[5001]: I0128 17:53:58.488493 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5988f4bb96-bmlnr_17fbd35a-c51b-4e33-b257-e8d10c67054c/manager/0.log" Jan 28 17:53:58 crc kubenswrapper[5001]: I0128 17:53:58.953597 5001 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-index-lqskc_9780fbb4-beca-4c12-a4ca-b90e06fb59ee/registry-server/0.log" Jan 28 17:53:59 crc kubenswrapper[5001]: I0128 17:53:59.346837 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-l44l6_c394fabc-a9e5-4e6b-81bb-511228e8c0fb/manager/0.log" Jan 28 17:53:59 crc kubenswrapper[5001]: I0128 17:53:59.749830 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854689x6_ee5300e4-6c64-4919-9ac3-1e8a9779abc3/manager/0.log" Jan 28 17:54:00 crc kubenswrapper[5001]: I0128 17:54:00.469549 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-556755cfd4-d79zz_73f5ff01-c3cc-4fa1-b265-09a6716a24a5/manager/0.log" Jan 28 17:54:00 crc kubenswrapper[5001]: I0128 17:54:00.961087 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-49ldn_13e9dd0f-e7c5-4959-9554-bf34549222cf/registry-server/0.log" Jan 28 17:54:01 crc kubenswrapper[5001]: I0128 17:54:01.460155 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-ll4sm_f16d3db5-4f22-4dc1-8cd2-9cf7c10fec26/manager/0.log" Jan 28 17:54:01 crc kubenswrapper[5001]: I0128 17:54:01.904138 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-s9l5b_a00c19fe-2da2-45ce-81b6-a32c17bbb1e7/manager/0.log" Jan 28 17:54:02 crc kubenswrapper[5001]: I0128 17:54:02.309047 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-dlw8k_95ef1fe7-914c-4c1e-9468-636a81ec6cce/operator/0.log" Jan 28 17:54:02 crc kubenswrapper[5001]: I0128 17:54:02.751676 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-92rdm_95fa542e-01b1-4cd6-878e-7afba27a9e5f/manager/0.log" Jan 28 17:54:03 crc kubenswrapper[5001]: I0128 17:54:03.112707 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-647dx_1ea33ae1-a3ae-4f47-b28d-166e582f8b83/manager/0.log" Jan 28 17:54:03 crc kubenswrapper[5001]: I0128 17:54:03.537576 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-djkj7_9ab5c237-6fba-4123-bbdd-051d9519d4fa/manager/0.log" Jan 28 17:54:03 crc kubenswrapper[5001]: I0128 17:54:03.595077 5001 scope.go:117] "RemoveContainer" containerID="190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15" Jan 28 17:54:03 crc kubenswrapper[5001]: E0128 17:54:03.595619 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:54:03 crc kubenswrapper[5001]: I0128 17:54:03.971770 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-np74j_38bc9590-ffbd-4924-90aa-c24a44a29bd7/manager/0.log" Jan 28 17:54:09 crc 
kubenswrapper[5001]: I0128 17:54:09.034489 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-848f6f5b7c-k72xs_77f41721-97c0-4a00-83ca-c5fb40170cfa/keystone-api/0.log" Jan 28 17:54:11 crc kubenswrapper[5001]: I0128 17:54:11.807868 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_cd92fc04-2e00-4c0a-b704-204aeeb70ff1/memcached/0.log" Jan 28 17:54:12 crc kubenswrapper[5001]: I0128 17:54:12.346408 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-84be-account-create-update-rzrth_849dd9e3-fa3a-413f-909f-356d87e51427/mariadb-account-create-update/0.log" Jan 28 17:54:12 crc kubenswrapper[5001]: I0128 17:54:12.888099 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-db-create-pcn2s_b747464b-8e55-4056-a282-11d656d849dc/mariadb-database-create/0.log" Jan 28 17:54:13 crc kubenswrapper[5001]: I0128 17:54:13.428165 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-6ca3-account-create-update-q8v42_874f309a-0607-4381-99fa-eb25d0e34f02/mariadb-account-create-update/0.log" Jan 28 17:54:13 crc kubenswrapper[5001]: I0128 17:54:13.936683 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-db-create-tb26s_7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11/mariadb-database-create/0.log" Jan 28 17:54:14 crc kubenswrapper[5001]: I0128 17:54:14.479070 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-cda9-account-create-update-cwncp_b9c76350-9eb6-495e-874f-a186c207bea6/mariadb-account-create-update/0.log" Jan 28 17:54:14 crc kubenswrapper[5001]: I0128 17:54:14.598993 5001 scope.go:117] "RemoveContainer" containerID="190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15" Jan 28 17:54:14 crc kubenswrapper[5001]: E0128 17:54:14.599238 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-tvxst_nova-kuttl-default(44595eef-a540-442d-8c7a-5f8bd2f2488c)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" Jan 28 17:54:14 crc kubenswrapper[5001]: I0128 17:54:14.935060 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-db-create-5gvgn_d09009c8-2bc4-4a46-8243-9f226a5aa244/mariadb-database-create/0.log" Jan 28 17:54:15 crc kubenswrapper[5001]: I0128 17:54:15.411415 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_8bb877d2-6693-4274-a705-6551fe435fb2/nova-kuttl-api-log/0.log" Jan 28 17:54:15 crc kubenswrapper[5001]: I0128 17:54:15.868817 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-cell-mapping-nsw78_c91ec979-f8eb-45b2-af41-a2040b954d89/nova-manage/0.log" Jan 28 17:54:16 crc kubenswrapper[5001]: I0128 17:54:16.354812 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-0_99fae513-1f96-42f3-9e69-e55b82c047dc/nova-kuttl-cell0-conductor-conductor/0.log" Jan 28 17:54:16 crc kubenswrapper[5001]: I0128 17:54:16.789082 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-db-sync-dvstz_7f16db31-239d-4a00-8c6d-e50c10fbf407/nova-kuttl-cell0-conductor-db-sync/0.log" 
Jan 28 17:54:17 crc kubenswrapper[5001]: I0128 17:54:17.174030 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-cell-delete-tvxst_44595eef-a540-442d-8c7a-5f8bd2f2488c/nova-manage/5.log" Jan 28 17:54:17 crc kubenswrapper[5001]: I0128 17:54:17.565551 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-cell-mapping-7zhhm_19d576b9-5be5-4988-a627-4d6b96e55a64/nova-manage/0.log" Jan 28 17:54:18 crc kubenswrapper[5001]: I0128 17:54:18.063030 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-0_95e53dae-9c1d-442b-b282-a377730ba93a/nova-kuttl-cell1-conductor-conductor/0.log" Jan 28 17:54:18 crc kubenswrapper[5001]: I0128 17:54:18.486180 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-db-sync-p5mk9_b6c2af0c-5c66-4f40-b9b4-10b4efec408a/nova-kuttl-cell1-conductor-db-sync/0.log" Jan 28 17:54:18 crc kubenswrapper[5001]: I0128 17:54:18.893468 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-novncproxy-0_1b94ad6a-ac77-455c-a73f-a9a047f5d714/nova-kuttl-cell1-novncproxy-novncproxy/0.log" Jan 28 17:54:19 crc kubenswrapper[5001]: I0128 17:54:19.403466 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_cdc03496-6661-435e-ae7f-c20ba5e7b381/nova-kuttl-metadata-log/0.log" Jan 28 17:54:19 crc kubenswrapper[5001]: I0128 17:54:19.834726 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-scheduler-0_7e25214b-bf18-4e49-82d6-53519f9b2ccd/nova-kuttl-scheduler-scheduler/0.log" Jan 28 17:54:20 crc kubenswrapper[5001]: I0128 17:54:20.301416 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_27cec822-2561-4682-bb1c-3fe4fd0805f4/galera/0.log" Jan 28 17:54:20 crc kubenswrapper[5001]: I0128 17:54:20.805183 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_4efb190a-f4c2-4761-9fab-7e26fc702121/galera/0.log" Jan 28 17:54:21 crc kubenswrapper[5001]: I0128 17:54:21.286241 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_535ec83f-ed6c-460b-8369-d710976d266f/openstackclient/0.log" Jan 28 17:54:21 crc kubenswrapper[5001]: I0128 17:54:21.744878 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-b8bc4c9f4-htpjw_85ac1e3b-7a2c-4c90-adcb-34d4efd01f41/placement-log/0.log" Jan 28 17:54:22 crc kubenswrapper[5001]: I0128 17:54:22.198324 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_bdcddc03-22a9-44af-8c79-67fe309358ec/rabbitmq/0.log" Jan 28 17:54:22 crc kubenswrapper[5001]: I0128 17:54:22.661418 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_1bc7adbb-0250-4320-98f4-7a0a69b77724/rabbitmq/0.log" Jan 28 17:54:23 crc kubenswrapper[5001]: I0128 17:54:23.110093 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_420bd810-85d0-4ced-bcd8-3ae62c8c79e4/rabbitmq/0.log" Jan 28 17:54:26 crc kubenswrapper[5001]: I0128 17:54:26.594196 5001 scope.go:117] "RemoveContainer" containerID="190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15" Jan 28 17:54:27 crc kubenswrapper[5001]: I0128 17:54:27.228256 5001 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" event={"ID":"44595eef-a540-442d-8c7a-5f8bd2f2488c","Type":"ContainerStarted","Data":"7ad33e2d9526d976f3083fd24e1960b70bb4ad8c46736106d3a03334ad9af86c"} Jan 28 17:54:28 crc kubenswrapper[5001]: I0128 17:54:28.259931 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst"] Jan 28 17:54:28 crc kubenswrapper[5001]: I0128 17:54:28.260467 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" containerID="cri-o://7ad33e2d9526d976f3083fd24e1960b70bb4ad8c46736106d3a03334ad9af86c" gracePeriod=30 Jan 28 17:54:31 crc kubenswrapper[5001]: I0128 17:54:31.883623 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.077305 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44595eef-a540-442d-8c7a-5f8bd2f2488c-scripts\") pod \"44595eef-a540-442d-8c7a-5f8bd2f2488c\" (UID: \"44595eef-a540-442d-8c7a-5f8bd2f2488c\") " Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.077439 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbqv2\" (UniqueName: \"kubernetes.io/projected/44595eef-a540-442d-8c7a-5f8bd2f2488c-kube-api-access-cbqv2\") pod \"44595eef-a540-442d-8c7a-5f8bd2f2488c\" (UID: \"44595eef-a540-442d-8c7a-5f8bd2f2488c\") " Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.077534 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44595eef-a540-442d-8c7a-5f8bd2f2488c-config-data\") pod \"44595eef-a540-442d-8c7a-5f8bd2f2488c\" (UID: \"44595eef-a540-442d-8c7a-5f8bd2f2488c\") " Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.082168 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44595eef-a540-442d-8c7a-5f8bd2f2488c-scripts" (OuterVolumeSpecName: "scripts") pod "44595eef-a540-442d-8c7a-5f8bd2f2488c" (UID: "44595eef-a540-442d-8c7a-5f8bd2f2488c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.082280 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44595eef-a540-442d-8c7a-5f8bd2f2488c-kube-api-access-cbqv2" (OuterVolumeSpecName: "kube-api-access-cbqv2") pod "44595eef-a540-442d-8c7a-5f8bd2f2488c" (UID: "44595eef-a540-442d-8c7a-5f8bd2f2488c"). InnerVolumeSpecName "kube-api-access-cbqv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.098271 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44595eef-a540-442d-8c7a-5f8bd2f2488c-config-data" (OuterVolumeSpecName: "config-data") pod "44595eef-a540-442d-8c7a-5f8bd2f2488c" (UID: "44595eef-a540-442d-8c7a-5f8bd2f2488c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.179695 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbqv2\" (UniqueName: \"kubernetes.io/projected/44595eef-a540-442d-8c7a-5f8bd2f2488c-kube-api-access-cbqv2\") on node \"crc\" DevicePath \"\"" Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.179742 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44595eef-a540-442d-8c7a-5f8bd2f2488c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.179754 5001 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44595eef-a540-442d-8c7a-5f8bd2f2488c-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.266571 5001 generic.go:334] "Generic (PLEG): container finished" podID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerID="7ad33e2d9526d976f3083fd24e1960b70bb4ad8c46736106d3a03334ad9af86c" exitCode=2 Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.266607 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.266619 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" event={"ID":"44595eef-a540-442d-8c7a-5f8bd2f2488c","Type":"ContainerDied","Data":"7ad33e2d9526d976f3083fd24e1960b70bb4ad8c46736106d3a03334ad9af86c"} Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.266658 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst" event={"ID":"44595eef-a540-442d-8c7a-5f8bd2f2488c","Type":"ContainerDied","Data":"be5b666b26347840f99268b8cb544f3ba0821401199e6689e359a938b6c273da"} Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.266678 5001 scope.go:117] "RemoveContainer" containerID="7ad33e2d9526d976f3083fd24e1960b70bb4ad8c46736106d3a03334ad9af86c" Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.286721 5001 scope.go:117] "RemoveContainer" containerID="190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15" Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.298012 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst"] Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.303837 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-tvxst"] Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.325339 5001 scope.go:117] "RemoveContainer" containerID="7ad33e2d9526d976f3083fd24e1960b70bb4ad8c46736106d3a03334ad9af86c" Jan 28 17:54:32 crc kubenswrapper[5001]: E0128 17:54:32.325700 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ad33e2d9526d976f3083fd24e1960b70bb4ad8c46736106d3a03334ad9af86c\": container with ID starting with 7ad33e2d9526d976f3083fd24e1960b70bb4ad8c46736106d3a03334ad9af86c not found: ID does not exist" containerID="7ad33e2d9526d976f3083fd24e1960b70bb4ad8c46736106d3a03334ad9af86c" Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.325730 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ad33e2d9526d976f3083fd24e1960b70bb4ad8c46736106d3a03334ad9af86c"} 
err="failed to get container status \"7ad33e2d9526d976f3083fd24e1960b70bb4ad8c46736106d3a03334ad9af86c\": rpc error: code = NotFound desc = could not find container \"7ad33e2d9526d976f3083fd24e1960b70bb4ad8c46736106d3a03334ad9af86c\": container with ID starting with 7ad33e2d9526d976f3083fd24e1960b70bb4ad8c46736106d3a03334ad9af86c not found: ID does not exist" Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.325749 5001 scope.go:117] "RemoveContainer" containerID="190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15" Jan 28 17:54:32 crc kubenswrapper[5001]: E0128 17:54:32.326316 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15\": container with ID starting with 190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15 not found: ID does not exist" containerID="190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15" Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.326340 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15"} err="failed to get container status \"190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15\": rpc error: code = NotFound desc = could not find container \"190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15\": container with ID starting with 190dc8fcc869c1af026d98216888f64167cf527db7d80e0808f17b34ba2dcf15 not found: ID does not exist" Jan 28 17:54:32 crc kubenswrapper[5001]: I0128 17:54:32.605379 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" path="/var/lib/kubelet/pods/44595eef-a540-442d-8c7a-5f8bd2f2488c/volumes" Jan 28 17:54:53 crc kubenswrapper[5001]: I0128 17:54:53.820578 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8_93ddfb0d-4440-4560-b5c5-3e252576ef02/extract/0.log" Jan 28 17:54:54 crc kubenswrapper[5001]: I0128 17:54:54.218201 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w_768cda87-43fb-49e7-a591-d7f0216e2683/extract/0.log" Jan 28 17:54:54 crc kubenswrapper[5001]: I0128 17:54:54.608373 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-jvbp8_842efc7d-25a3-4383-9ca0-a3d2e101990a/manager/0.log" Jan 28 17:54:55 crc kubenswrapper[5001]: I0128 17:54:55.037066 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-pvh7q_44247ccc-08d6-4c04-ae14-7595add07217/manager/0.log" Jan 28 17:54:55 crc kubenswrapper[5001]: I0128 17:54:55.468911 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-jncbk_cff5af3e-db62-41be-b49c-8df7ea7a015a/manager/0.log" Jan 28 17:54:55 crc kubenswrapper[5001]: I0128 17:54:55.855332 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-cv7jq_1f3f3b33-d586-448c-a967-fcd03c6fb11d/manager/0.log" Jan 28 17:54:56 crc kubenswrapper[5001]: I0128 17:54:56.283938 5001 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-rpd8k_f3751f38-96d6-42a8-98da-05cfbd294fb5/manager/0.log" Jan 28 17:54:56 crc kubenswrapper[5001]: I0128 17:54:56.686882 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-dt2m9_0311571f-c23c-4554-8763-a3daced65fc8/manager/0.log" Jan 28 17:54:57 crc kubenswrapper[5001]: I0128 17:54:57.227523 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-jm966_42016b42-c753-4265-9902-c2969117ad64/manager/0.log" Jan 28 17:54:57 crc kubenswrapper[5001]: I0128 17:54:57.645255 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-twmcs_77b3a8a4-addb-4f1c-95e5-8ad4b54ebf7d/manager/0.log" Jan 28 17:54:58 crc kubenswrapper[5001]: I0128 17:54:58.094262 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-p6lff_0d72532d-5aac-40f4-b308-4ac21a287e81/manager/0.log" Jan 28 17:54:58 crc kubenswrapper[5001]: I0128 17:54:58.521519 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-qrfs7_2049f024-4549-43c1-b3ea-c42b38ade539/manager/0.log" Jan 28 17:54:58 crc kubenswrapper[5001]: I0128 17:54:58.962235 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-6kwks_660281b0-2db3-4f96-a8c5-69c0ca0a5072/manager/0.log" Jan 28 17:54:59 crc kubenswrapper[5001]: I0128 17:54:59.404308 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-c44sh_730865cc-5b68-4c45-927b-8a5fee90c539/manager/0.log" Jan 28 17:55:00 crc kubenswrapper[5001]: I0128 17:55:00.179885 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5988f4bb96-bmlnr_17fbd35a-c51b-4e33-b257-e8d10c67054c/manager/0.log" Jan 28 17:55:00 crc kubenswrapper[5001]: I0128 17:55:00.582609 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-lqskc_9780fbb4-beca-4c12-a4ca-b90e06fb59ee/registry-server/0.log" Jan 28 17:55:00 crc kubenswrapper[5001]: I0128 17:55:00.980681 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-l44l6_c394fabc-a9e5-4e6b-81bb-511228e8c0fb/manager/0.log" Jan 28 17:55:01 crc kubenswrapper[5001]: I0128 17:55:01.370436 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854689x6_ee5300e4-6c64-4919-9ac3-1e8a9779abc3/manager/0.log" Jan 28 17:55:02 crc kubenswrapper[5001]: I0128 17:55:02.126009 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-556755cfd4-d79zz_73f5ff01-c3cc-4fa1-b265-09a6716a24a5/manager/0.log" Jan 28 17:55:02 crc kubenswrapper[5001]: I0128 17:55:02.540152 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-49ldn_13e9dd0f-e7c5-4959-9554-bf34549222cf/registry-server/0.log" Jan 28 17:55:02 crc kubenswrapper[5001]: I0128 17:55:02.969512 5001 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-ll4sm_f16d3db5-4f22-4dc1-8cd2-9cf7c10fec26/manager/0.log" Jan 28 17:55:03 crc kubenswrapper[5001]: I0128 17:55:03.372258 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-s9l5b_a00c19fe-2da2-45ce-81b6-a32c17bbb1e7/manager/0.log" Jan 28 17:55:03 crc kubenswrapper[5001]: I0128 17:55:03.779473 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-dlw8k_95ef1fe7-914c-4c1e-9468-636a81ec6cce/operator/0.log" Jan 28 17:55:04 crc kubenswrapper[5001]: I0128 17:55:04.184389 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-92rdm_95fa542e-01b1-4cd6-878e-7afba27a9e5f/manager/0.log" Jan 28 17:55:04 crc kubenswrapper[5001]: I0128 17:55:04.595403 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-647dx_1ea33ae1-a3ae-4f47-b28d-166e582f8b83/manager/0.log" Jan 28 17:55:04 crc kubenswrapper[5001]: I0128 17:55:04.834778 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:55:04 crc kubenswrapper[5001]: I0128 17:55:04.834863 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:55:04 crc kubenswrapper[5001]: I0128 17:55:04.972000 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-djkj7_9ab5c237-6fba-4123-bbdd-051d9519d4fa/manager/0.log" Jan 28 17:55:05 crc kubenswrapper[5001]: I0128 17:55:05.373946 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-np74j_38bc9590-ffbd-4924-90aa-c24a44a29bd7/manager/0.log" Jan 28 17:55:34 crc kubenswrapper[5001]: I0128 17:55:34.834712 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:55:34 crc kubenswrapper[5001]: I0128 17:55:34.835318 5001 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.238637 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pjnn8/must-gather-44722"] Jan 28 17:55:35 crc kubenswrapper[5001]: E0128 17:55:35.239250 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:55:35 crc kubenswrapper[5001]: 
I0128 17:55:35.239270 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:55:35 crc kubenswrapper[5001]: E0128 17:55:35.239283 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.239294 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:55:35 crc kubenswrapper[5001]: E0128 17:55:35.239313 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd40026a-6940-42f1-87c8-36e7d3ba8258" containerName="extract-content" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.239321 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd40026a-6940-42f1-87c8-36e7d3ba8258" containerName="extract-content" Jan 28 17:55:35 crc kubenswrapper[5001]: E0128 17:55:35.239335 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd40026a-6940-42f1-87c8-36e7d3ba8258" containerName="registry-server" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.239343 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd40026a-6940-42f1-87c8-36e7d3ba8258" containerName="registry-server" Jan 28 17:55:35 crc kubenswrapper[5001]: E0128 17:55:35.239355 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.239362 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:55:35 crc kubenswrapper[5001]: E0128 17:55:35.239372 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.239381 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:55:35 crc kubenswrapper[5001]: E0128 17:55:35.239405 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd40026a-6940-42f1-87c8-36e7d3ba8258" containerName="extract-utilities" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.239414 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd40026a-6940-42f1-87c8-36e7d3ba8258" containerName="extract-utilities" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.239587 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.239598 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.239607 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.239631 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.239641 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd40026a-6940-42f1-87c8-36e7d3ba8258" containerName="registry-server" Jan 28 17:55:35 crc kubenswrapper[5001]: E0128 
17:55:35.239837 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.239847 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:55:35 crc kubenswrapper[5001]: E0128 17:55:35.239862 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.239870 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.240054 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.240065 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.240077 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.240671 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pjnn8/must-gather-44722" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.242574 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pjnn8"/"openshift-service-ca.crt" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.243197 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pjnn8"/"default-dockercfg-54kjg" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.248284 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pjnn8"/"kube-root-ca.crt" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.262608 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pjnn8/must-gather-44722"] Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.388276 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4cec17ac-482a-4e10-9a55-6f61b3b3eddf-must-gather-output\") pod \"must-gather-44722\" (UID: \"4cec17ac-482a-4e10-9a55-6f61b3b3eddf\") " pod="openshift-must-gather-pjnn8/must-gather-44722" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.388480 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6dkw\" (UniqueName: \"kubernetes.io/projected/4cec17ac-482a-4e10-9a55-6f61b3b3eddf-kube-api-access-z6dkw\") pod \"must-gather-44722\" (UID: \"4cec17ac-482a-4e10-9a55-6f61b3b3eddf\") " pod="openshift-must-gather-pjnn8/must-gather-44722" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.490571 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4cec17ac-482a-4e10-9a55-6f61b3b3eddf-must-gather-output\") pod \"must-gather-44722\" (UID: \"4cec17ac-482a-4e10-9a55-6f61b3b3eddf\") " pod="openshift-must-gather-pjnn8/must-gather-44722" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.490658 5001 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6dkw\" (UniqueName: \"kubernetes.io/projected/4cec17ac-482a-4e10-9a55-6f61b3b3eddf-kube-api-access-z6dkw\") pod \"must-gather-44722\" (UID: \"4cec17ac-482a-4e10-9a55-6f61b3b3eddf\") " pod="openshift-must-gather-pjnn8/must-gather-44722" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.491145 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4cec17ac-482a-4e10-9a55-6f61b3b3eddf-must-gather-output\") pod \"must-gather-44722\" (UID: \"4cec17ac-482a-4e10-9a55-6f61b3b3eddf\") " pod="openshift-must-gather-pjnn8/must-gather-44722" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.512876 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6dkw\" (UniqueName: \"kubernetes.io/projected/4cec17ac-482a-4e10-9a55-6f61b3b3eddf-kube-api-access-z6dkw\") pod \"must-gather-44722\" (UID: \"4cec17ac-482a-4e10-9a55-6f61b3b3eddf\") " pod="openshift-must-gather-pjnn8/must-gather-44722" Jan 28 17:55:35 crc kubenswrapper[5001]: I0128 17:55:35.560235 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pjnn8/must-gather-44722" Jan 28 17:55:36 crc kubenswrapper[5001]: I0128 17:55:36.002072 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pjnn8/must-gather-44722"] Jan 28 17:55:36 crc kubenswrapper[5001]: I0128 17:55:36.810166 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pjnn8/must-gather-44722" event={"ID":"4cec17ac-482a-4e10-9a55-6f61b3b3eddf","Type":"ContainerStarted","Data":"120ef95fc76c92923079526c03f32f5a3098d37f26b328ca3f0a4e54c76878c1"} Jan 28 17:55:43 crc kubenswrapper[5001]: I0128 17:55:43.906154 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pjnn8/must-gather-44722" event={"ID":"4cec17ac-482a-4e10-9a55-6f61b3b3eddf","Type":"ContainerStarted","Data":"8e2d92b634a95a2ff55c95ae23ee7f5e1315173d6c329b5ae26a9c39b93bfdeb"} Jan 28 17:55:43 crc kubenswrapper[5001]: I0128 17:55:43.906692 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pjnn8/must-gather-44722" event={"ID":"4cec17ac-482a-4e10-9a55-6f61b3b3eddf","Type":"ContainerStarted","Data":"0c5c93da5361b264e7e73d15ef19f3f0cfcf8fd44dcc38da47a1dea8dc751bcc"} Jan 28 17:55:43 crc kubenswrapper[5001]: I0128 17:55:43.920375 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pjnn8/must-gather-44722" podStartSLOduration=1.7215977869999999 podStartE2EDuration="8.92036228s" podCreationTimestamp="2026-01-28 17:55:35 +0000 UTC" firstStartedPulling="2026-01-28 17:55:36.009052894 +0000 UTC m=+2382.176841124" lastFinishedPulling="2026-01-28 17:55:43.207817387 +0000 UTC m=+2389.375605617" observedRunningTime="2026-01-28 17:55:43.918718513 +0000 UTC m=+2390.086506743" watchObservedRunningTime="2026-01-28 17:55:43.92036228 +0000 UTC m=+2390.088150510" Jan 28 17:56:04 crc kubenswrapper[5001]: I0128 17:56:04.834728 5001 patch_prober.go:28] interesting pod/machine-config-daemon-mqgwk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:56:04 crc kubenswrapper[5001]: I0128 17:56:04.835310 5001 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:56:04 crc kubenswrapper[5001]: I0128 17:56:04.835357 5001 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" Jan 28 17:56:04 crc kubenswrapper[5001]: I0128 17:56:04.836006 5001 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3"} pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:56:04 crc kubenswrapper[5001]: I0128 17:56:04.836072 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerName="machine-config-daemon" containerID="cri-o://e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" gracePeriod=600 Jan 28 17:56:04 crc kubenswrapper[5001]: E0128 17:56:04.961061 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:56:05 crc kubenswrapper[5001]: I0128 17:56:05.084035 5001 generic.go:334] "Generic (PLEG): container finished" podID="8de2d052-6f7c-4345-91fa-ba2fc7532251" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" exitCode=0 Jan 28 17:56:05 crc kubenswrapper[5001]: I0128 17:56:05.084080 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerDied","Data":"e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3"} Jan 28 17:56:05 crc kubenswrapper[5001]: I0128 17:56:05.084111 5001 scope.go:117] "RemoveContainer" containerID="a79c07227a7c2c6a248b779d6d6d0731af5b42260a236321116a9fb26f5132cb" Jan 28 17:56:05 crc kubenswrapper[5001]: I0128 17:56:05.084749 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:56:05 crc kubenswrapper[5001]: E0128 17:56:05.085010 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:56:17 crc kubenswrapper[5001]: I0128 17:56:17.594480 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:56:17 crc kubenswrapper[5001]: E0128 17:56:17.595199 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:56:29 crc kubenswrapper[5001]: I0128 17:56:29.594108 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:56:29 crc kubenswrapper[5001]: E0128 17:56:29.594903 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:56:40 crc kubenswrapper[5001]: I0128 17:56:40.594438 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:56:40 crc kubenswrapper[5001]: E0128 17:56:40.596323 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:56:41 crc kubenswrapper[5001]: I0128 17:56:41.922826 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8_93ddfb0d-4440-4560-b5c5-3e252576ef02/util/0.log" Jan 28 17:56:42 crc kubenswrapper[5001]: I0128 17:56:42.119085 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8_93ddfb0d-4440-4560-b5c5-3e252576ef02/pull/0.log" Jan 28 17:56:42 crc kubenswrapper[5001]: I0128 17:56:42.123019 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8_93ddfb0d-4440-4560-b5c5-3e252576ef02/util/0.log" Jan 28 17:56:42 crc kubenswrapper[5001]: I0128 17:56:42.147351 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8_93ddfb0d-4440-4560-b5c5-3e252576ef02/pull/0.log" Jan 28 17:56:42 crc kubenswrapper[5001]: I0128 17:56:42.279964 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8_93ddfb0d-4440-4560-b5c5-3e252576ef02/pull/0.log" Jan 28 17:56:42 crc kubenswrapper[5001]: I0128 17:56:42.282227 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8_93ddfb0d-4440-4560-b5c5-3e252576ef02/util/0.log" Jan 28 17:56:42 crc kubenswrapper[5001]: I0128 17:56:42.325624 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2e373f14bfbb92ac87633f1df95095666f6a05fa91e6402190697f693djzkv8_93ddfb0d-4440-4560-b5c5-3e252576ef02/extract/0.log" Jan 28 17:56:42 crc 
kubenswrapper[5001]: I0128 17:56:42.457585 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w_768cda87-43fb-49e7-a591-d7f0216e2683/util/0.log" Jan 28 17:56:42 crc kubenswrapper[5001]: I0128 17:56:42.620585 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w_768cda87-43fb-49e7-a591-d7f0216e2683/util/0.log" Jan 28 17:56:42 crc kubenswrapper[5001]: I0128 17:56:42.642571 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w_768cda87-43fb-49e7-a591-d7f0216e2683/pull/0.log" Jan 28 17:56:42 crc kubenswrapper[5001]: I0128 17:56:42.645730 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w_768cda87-43fb-49e7-a591-d7f0216e2683/pull/0.log" Jan 28 17:56:42 crc kubenswrapper[5001]: I0128 17:56:42.832630 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w_768cda87-43fb-49e7-a591-d7f0216e2683/extract/0.log" Jan 28 17:56:42 crc kubenswrapper[5001]: I0128 17:56:42.845418 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w_768cda87-43fb-49e7-a591-d7f0216e2683/util/0.log" Jan 28 17:56:42 crc kubenswrapper[5001]: I0128 17:56:42.870281 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b57561707f1ef3f631ca3afc02a43f3685c80f1e44e7ee03860f1d0047jft4w_768cda87-43fb-49e7-a591-d7f0216e2683/pull/0.log" Jan 28 17:56:43 crc kubenswrapper[5001]: I0128 17:56:43.019087 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-jvbp8_842efc7d-25a3-4383-9ca0-a3d2e101990a/manager/0.log" Jan 28 17:56:43 crc kubenswrapper[5001]: I0128 17:56:43.065540 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-pvh7q_44247ccc-08d6-4c04-ae14-7595add07217/manager/0.log" Jan 28 17:56:43 crc kubenswrapper[5001]: I0128 17:56:43.186185 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-jncbk_cff5af3e-db62-41be-b49c-8df7ea7a015a/manager/0.log" Jan 28 17:56:43 crc kubenswrapper[5001]: I0128 17:56:43.248241 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-cv7jq_1f3f3b33-d586-448c-a967-fcd03c6fb11d/manager/0.log" Jan 28 17:56:43 crc kubenswrapper[5001]: I0128 17:56:43.382191 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-rpd8k_f3751f38-96d6-42a8-98da-05cfbd294fb5/manager/0.log" Jan 28 17:56:43 crc kubenswrapper[5001]: I0128 17:56:43.446608 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-dt2m9_0311571f-c23c-4554-8763-a3daced65fc8/manager/0.log" Jan 28 17:56:43 crc kubenswrapper[5001]: I0128 17:56:43.624043 5001 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-twmcs_77b3a8a4-addb-4f1c-95e5-8ad4b54ebf7d/manager/0.log" Jan 28 17:56:43 crc kubenswrapper[5001]: I0128 17:56:43.660092 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-jm966_42016b42-c753-4265-9902-c2969117ad64/manager/0.log" Jan 28 17:56:43 crc kubenswrapper[5001]: I0128 17:56:43.827960 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-p6lff_0d72532d-5aac-40f4-b308-4ac21a287e81/manager/0.log" Jan 28 17:56:43 crc kubenswrapper[5001]: I0128 17:56:43.861559 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-qrfs7_2049f024-4549-43c1-b3ea-c42b38ade539/manager/0.log" Jan 28 17:56:44 crc kubenswrapper[5001]: I0128 17:56:44.045556 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-6kwks_660281b0-2db3-4f96-a8c5-69c0ca0a5072/manager/0.log" Jan 28 17:56:44 crc kubenswrapper[5001]: I0128 17:56:44.095637 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-c44sh_730865cc-5b68-4c45-927b-8a5fee90c539/manager/0.log" Jan 28 17:56:44 crc kubenswrapper[5001]: I0128 17:56:44.324376 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-lqskc_9780fbb4-beca-4c12-a4ca-b90e06fb59ee/registry-server/0.log" Jan 28 17:56:44 crc kubenswrapper[5001]: I0128 17:56:44.519605 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-l44l6_c394fabc-a9e5-4e6b-81bb-511228e8c0fb/manager/0.log" Jan 28 17:56:44 crc kubenswrapper[5001]: I0128 17:56:44.600936 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5988f4bb96-bmlnr_17fbd35a-c51b-4e33-b257-e8d10c67054c/manager/0.log" Jan 28 17:56:44 crc kubenswrapper[5001]: I0128 17:56:44.632750 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854689x6_ee5300e4-6c64-4919-9ac3-1e8a9779abc3/manager/0.log" Jan 28 17:56:44 crc kubenswrapper[5001]: I0128 17:56:44.831993 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-49ldn_13e9dd0f-e7c5-4959-9554-bf34549222cf/registry-server/0.log" Jan 28 17:56:45 crc kubenswrapper[5001]: I0128 17:56:45.003152 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-556755cfd4-d79zz_73f5ff01-c3cc-4fa1-b265-09a6716a24a5/manager/0.log" Jan 28 17:56:45 crc kubenswrapper[5001]: I0128 17:56:45.022124 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-ll4sm_f16d3db5-4f22-4dc1-8cd2-9cf7c10fec26/manager/0.log" Jan 28 17:56:45 crc kubenswrapper[5001]: I0128 17:56:45.191861 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-s9l5b_a00c19fe-2da2-45ce-81b6-a32c17bbb1e7/manager/0.log" Jan 28 17:56:45 crc kubenswrapper[5001]: I0128 17:56:45.251860 5001 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-dlw8k_95ef1fe7-914c-4c1e-9468-636a81ec6cce/operator/0.log" Jan 28 17:56:45 crc kubenswrapper[5001]: I0128 17:56:45.367770 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-92rdm_95fa542e-01b1-4cd6-878e-7afba27a9e5f/manager/0.log" Jan 28 17:56:45 crc kubenswrapper[5001]: I0128 17:56:45.413846 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-647dx_1ea33ae1-a3ae-4f47-b28d-166e582f8b83/manager/0.log" Jan 28 17:56:45 crc kubenswrapper[5001]: I0128 17:56:45.582341 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-djkj7_9ab5c237-6fba-4123-bbdd-051d9519d4fa/manager/0.log" Jan 28 17:56:45 crc kubenswrapper[5001]: I0128 17:56:45.687909 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-np74j_38bc9590-ffbd-4924-90aa-c24a44a29bd7/manager/0.log" Jan 28 17:56:48 crc kubenswrapper[5001]: I0128 17:56:48.036215 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-pcn2s"] Jan 28 17:56:48 crc kubenswrapper[5001]: I0128 17:56:48.044173 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-pcn2s"] Jan 28 17:56:48 crc kubenswrapper[5001]: I0128 17:56:48.604767 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b747464b-8e55-4056-a282-11d656d849dc" path="/var/lib/kubelet/pods/b747464b-8e55-4056-a282-11d656d849dc/volumes" Jan 28 17:56:49 crc kubenswrapper[5001]: I0128 17:56:49.025435 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-5gvgn"] Jan 28 17:56:49 crc kubenswrapper[5001]: I0128 17:56:49.031595 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-84be-account-create-update-rzrth"] Jan 28 17:56:49 crc kubenswrapper[5001]: I0128 17:56:49.037700 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-tb26s"] Jan 28 17:56:49 crc kubenswrapper[5001]: I0128 17:56:49.043702 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-cda9-account-create-update-cwncp"] Jan 28 17:56:49 crc kubenswrapper[5001]: I0128 17:56:49.050804 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-6ca3-account-create-update-q8v42"] Jan 28 17:56:49 crc kubenswrapper[5001]: I0128 17:56:49.058637 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-tb26s"] Jan 28 17:56:49 crc kubenswrapper[5001]: I0128 17:56:49.064883 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-84be-account-create-update-rzrth"] Jan 28 17:56:49 crc kubenswrapper[5001]: I0128 17:56:49.071263 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-cda9-account-create-update-cwncp"] Jan 28 17:56:49 crc kubenswrapper[5001]: I0128 17:56:49.077857 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-5gvgn"] Jan 28 17:56:49 crc kubenswrapper[5001]: I0128 17:56:49.085426 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-6ca3-account-create-update-q8v42"] Jan 28 
17:56:50 crc kubenswrapper[5001]: I0128 17:56:50.622705 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11" path="/var/lib/kubelet/pods/7b1c1c16-1e2a-45e4-80a1-c865cb1d5a11/volumes" Jan 28 17:56:50 crc kubenswrapper[5001]: I0128 17:56:50.623771 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="849dd9e3-fa3a-413f-909f-356d87e51427" path="/var/lib/kubelet/pods/849dd9e3-fa3a-413f-909f-356d87e51427/volumes" Jan 28 17:56:50 crc kubenswrapper[5001]: I0128 17:56:50.624531 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="874f309a-0607-4381-99fa-eb25d0e34f02" path="/var/lib/kubelet/pods/874f309a-0607-4381-99fa-eb25d0e34f02/volumes" Jan 28 17:56:50 crc kubenswrapper[5001]: I0128 17:56:50.625232 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9c76350-9eb6-495e-874f-a186c207bea6" path="/var/lib/kubelet/pods/b9c76350-9eb6-495e-874f-a186c207bea6/volumes" Jan 28 17:56:50 crc kubenswrapper[5001]: I0128 17:56:50.626242 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d09009c8-2bc4-4a46-8243-9f226a5aa244" path="/var/lib/kubelet/pods/d09009c8-2bc4-4a46-8243-9f226a5aa244/volumes" Jan 28 17:56:54 crc kubenswrapper[5001]: I0128 17:56:54.600831 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:56:54 crc kubenswrapper[5001]: E0128 17:56:54.601510 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:56:58 crc kubenswrapper[5001]: I0128 17:56:58.025690 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz"] Jan 28 17:56:58 crc kubenswrapper[5001]: I0128 17:56:58.032781 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-dvstz"] Jan 28 17:56:58 crc kubenswrapper[5001]: I0128 17:56:58.602590 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f16db31-239d-4a00-8c6d-e50c10fbf407" path="/var/lib/kubelet/pods/7f16db31-239d-4a00-8c6d-e50c10fbf407/volumes" Jan 28 17:57:01 crc kubenswrapper[5001]: I0128 17:57:01.524275 5001 scope.go:117] "RemoveContainer" containerID="daff58fa8338d8a89e855b146c86a7c35e4ea1a75d2a16b00dd9f2b82616eee7" Jan 28 17:57:01 crc kubenswrapper[5001]: I0128 17:57:01.545695 5001 scope.go:117] "RemoveContainer" containerID="88a7e7523581a149933ecf7475ece153c78409eeb535fc1826f6f8a0820f1e66" Jan 28 17:57:01 crc kubenswrapper[5001]: I0128 17:57:01.583490 5001 scope.go:117] "RemoveContainer" containerID="40fec92a5a7a033031de6808dba64028963cf470744f8cc2acbf0a1137ab7322" Jan 28 17:57:01 crc kubenswrapper[5001]: I0128 17:57:01.622899 5001 scope.go:117] "RemoveContainer" containerID="60ad9422aec8abbf9dad0e9cf4ad47cf0de86d62b0a5a059a4af3a40ead4669c" Jan 28 17:57:01 crc kubenswrapper[5001]: I0128 17:57:01.643397 5001 scope.go:117] "RemoveContainer" containerID="6ee9353ab626a99c354759ba2ac6e9e2dd616c20b53c45f890b36a6819668b42" Jan 28 17:57:01 crc kubenswrapper[5001]: I0128 17:57:01.694271 5001 scope.go:117] "RemoveContainer" 
containerID="3d37cc8831ae3c4e892fa03e1574b1341938a982941cade14e14f31e6f349b09" Jan 28 17:57:01 crc kubenswrapper[5001]: I0128 17:57:01.715860 5001 scope.go:117] "RemoveContainer" containerID="4e05d46858e5c1e841905d6260e5a232b33fe918555b9e5925a5952104db8222" Jan 28 17:57:03 crc kubenswrapper[5001]: I0128 17:57:03.093414 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-2h5l2_bc33c805-eeaf-40d2-977a-40c7fffc3b34/control-plane-machine-set-operator/0.log" Jan 28 17:57:03 crc kubenswrapper[5001]: I0128 17:57:03.274012 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jk8v9_ca322b78-934b-4119-a0f6-8037e473a1f9/machine-api-operator/0.log" Jan 28 17:57:03 crc kubenswrapper[5001]: I0128 17:57:03.288352 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jk8v9_ca322b78-934b-4119-a0f6-8037e473a1f9/kube-rbac-proxy/0.log" Jan 28 17:57:05 crc kubenswrapper[5001]: I0128 17:57:05.593965 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:57:05 crc kubenswrapper[5001]: E0128 17:57:05.594402 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:57:14 crc kubenswrapper[5001]: I0128 17:57:14.756474 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-gcfmc_8ee24a09-8b61-427a-a338-4a96a4a47716/cert-manager-controller/0.log" Jan 28 17:57:14 crc kubenswrapper[5001]: I0128 17:57:14.950002 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-wf68z_267af68d-70ec-485f-bb8a-abd72b8a5323/cert-manager-cainjector/0.log" Jan 28 17:57:15 crc kubenswrapper[5001]: I0128 17:57:15.004079 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-qr8rl_d2543c7d-8b33-4432-a05b-0e8d0b24a168/cert-manager-webhook/0.log" Jan 28 17:57:16 crc kubenswrapper[5001]: I0128 17:57:16.044367 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9"] Jan 28 17:57:16 crc kubenswrapper[5001]: I0128 17:57:16.050431 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p5mk9"] Jan 28 17:57:16 crc kubenswrapper[5001]: I0128 17:57:16.603433 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c2af0c-5c66-4f40-b9b4-10b4efec408a" path="/var/lib/kubelet/pods/b6c2af0c-5c66-4f40-b9b4-10b4efec408a/volumes" Jan 28 17:57:17 crc kubenswrapper[5001]: I0128 17:57:17.026451 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78"] Jan 28 17:57:17 crc kubenswrapper[5001]: I0128 17:57:17.032632 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-nsw78"] Jan 28 17:57:17 crc kubenswrapper[5001]: I0128 17:57:17.593726 5001 scope.go:117] "RemoveContainer" 
containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:57:17 crc kubenswrapper[5001]: E0128 17:57:17.593934 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:57:18 crc kubenswrapper[5001]: I0128 17:57:18.605896 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c91ec979-f8eb-45b2-af41-a2040b954d89" path="/var/lib/kubelet/pods/c91ec979-f8eb-45b2-af41-a2040b954d89/volumes" Jan 28 17:57:27 crc kubenswrapper[5001]: I0128 17:57:27.196627 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-gwvwl_6aa45dc9-1c42-4dbc-b421-cd87505ab222/nmstate-console-plugin/0.log" Jan 28 17:57:27 crc kubenswrapper[5001]: I0128 17:57:27.330577 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-v9mzz_f75ca53f-11de-4a98-93dc-0f269011b505/nmstate-handler/0.log" Jan 28 17:57:27 crc kubenswrapper[5001]: I0128 17:57:27.400572 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-t9m4t_4defca7c-f2c4-428f-b722-a1e9895e42fe/kube-rbac-proxy/0.log" Jan 28 17:57:27 crc kubenswrapper[5001]: I0128 17:57:27.519271 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-t9m4t_4defca7c-f2c4-428f-b722-a1e9895e42fe/nmstate-metrics/0.log" Jan 28 17:57:27 crc kubenswrapper[5001]: I0128 17:57:27.626813 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-rqzx8_1ab14cf7-b801-43af-868b-e05478534e41/nmstate-operator/0.log" Jan 28 17:57:27 crc kubenswrapper[5001]: I0128 17:57:27.722174 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-fmrf6_a29a6746-0c3a-4887-ac29-530b8771c1dc/nmstate-webhook/0.log" Jan 28 17:57:28 crc kubenswrapper[5001]: I0128 17:57:28.594671 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:57:28 crc kubenswrapper[5001]: E0128 17:57:28.595293 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:57:35 crc kubenswrapper[5001]: I0128 17:57:35.029955 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm"] Jan 28 17:57:35 crc kubenswrapper[5001]: I0128 17:57:35.038457 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-7zhhm"] Jan 28 17:57:36 crc kubenswrapper[5001]: I0128 17:57:36.603964 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19d576b9-5be5-4988-a627-4d6b96e55a64" path="/var/lib/kubelet/pods/19d576b9-5be5-4988-a627-4d6b96e55a64/volumes" Jan 28 17:57:40 
crc kubenswrapper[5001]: I0128 17:57:40.594660 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:57:40 crc kubenswrapper[5001]: E0128 17:57:40.595407 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:57:52 crc kubenswrapper[5001]: I0128 17:57:52.594209 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:57:52 crc kubenswrapper[5001]: E0128 17:57:52.595106 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:57:53 crc kubenswrapper[5001]: I0128 17:57:53.697956 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-422s5_c93402b1-0843-4fac-980f-172929e0cb5e/kube-rbac-proxy/0.log" Jan 28 17:57:53 crc kubenswrapper[5001]: I0128 17:57:53.888020 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-422s5_c93402b1-0843-4fac-980f-172929e0cb5e/controller/0.log" Jan 28 17:57:53 crc kubenswrapper[5001]: I0128 17:57:53.969683 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mqw4_d4307f53-3a2c-4fb5-8c0b-395eaf4582bb/cp-frr-files/0.log" Jan 28 17:57:54 crc kubenswrapper[5001]: I0128 17:57:54.063238 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mqw4_d4307f53-3a2c-4fb5-8c0b-395eaf4582bb/cp-frr-files/0.log" Jan 28 17:57:54 crc kubenswrapper[5001]: I0128 17:57:54.094409 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mqw4_d4307f53-3a2c-4fb5-8c0b-395eaf4582bb/cp-reloader/0.log" Jan 28 17:57:54 crc kubenswrapper[5001]: I0128 17:57:54.140345 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mqw4_d4307f53-3a2c-4fb5-8c0b-395eaf4582bb/cp-metrics/0.log" Jan 28 17:57:54 crc kubenswrapper[5001]: I0128 17:57:54.155758 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mqw4_d4307f53-3a2c-4fb5-8c0b-395eaf4582bb/cp-reloader/0.log" Jan 28 17:57:54 crc kubenswrapper[5001]: I0128 17:57:54.334611 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mqw4_d4307f53-3a2c-4fb5-8c0b-395eaf4582bb/cp-reloader/0.log" Jan 28 17:57:54 crc kubenswrapper[5001]: I0128 17:57:54.334696 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mqw4_d4307f53-3a2c-4fb5-8c0b-395eaf4582bb/cp-metrics/0.log" Jan 28 17:57:54 crc kubenswrapper[5001]: I0128 17:57:54.369504 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mqw4_d4307f53-3a2c-4fb5-8c0b-395eaf4582bb/cp-frr-files/0.log" Jan 28 17:57:54 crc kubenswrapper[5001]: I0128 17:57:54.410364 5001 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mqw4_d4307f53-3a2c-4fb5-8c0b-395eaf4582bb/cp-metrics/0.log" Jan 28 17:57:54 crc kubenswrapper[5001]: I0128 17:57:54.648165 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mqw4_d4307f53-3a2c-4fb5-8c0b-395eaf4582bb/cp-reloader/0.log" Jan 28 17:57:54 crc kubenswrapper[5001]: I0128 17:57:54.725839 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mqw4_d4307f53-3a2c-4fb5-8c0b-395eaf4582bb/cp-metrics/0.log" Jan 28 17:57:54 crc kubenswrapper[5001]: I0128 17:57:54.726285 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mqw4_d4307f53-3a2c-4fb5-8c0b-395eaf4582bb/cp-frr-files/0.log" Jan 28 17:57:54 crc kubenswrapper[5001]: I0128 17:57:54.777297 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mqw4_d4307f53-3a2c-4fb5-8c0b-395eaf4582bb/controller/0.log" Jan 28 17:57:54 crc kubenswrapper[5001]: I0128 17:57:54.925056 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mqw4_d4307f53-3a2c-4fb5-8c0b-395eaf4582bb/frr-metrics/0.log" Jan 28 17:57:54 crc kubenswrapper[5001]: I0128 17:57:54.933298 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mqw4_d4307f53-3a2c-4fb5-8c0b-395eaf4582bb/kube-rbac-proxy/0.log" Jan 28 17:57:55 crc kubenswrapper[5001]: I0128 17:57:55.063466 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mqw4_d4307f53-3a2c-4fb5-8c0b-395eaf4582bb/kube-rbac-proxy-frr/0.log" Jan 28 17:57:55 crc kubenswrapper[5001]: I0128 17:57:55.126617 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mqw4_d4307f53-3a2c-4fb5-8c0b-395eaf4582bb/reloader/0.log" Jan 28 17:57:55 crc kubenswrapper[5001]: I0128 17:57:55.280882 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-92lkl_c196e172-3c33-4317-82c5-2dfbb916f6c4/frr-k8s-webhook-server/0.log" Jan 28 17:57:55 crc kubenswrapper[5001]: I0128 17:57:55.466726 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-847cf68876-29g2k_92f622d5-900f-4c24-b7d2-dfea8ccae720/manager/0.log" Jan 28 17:57:55 crc kubenswrapper[5001]: I0128 17:57:55.661113 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-86cf947cc7-rx2x6_19c14ea1-ac12-4a0c-a10c-e65476d4aa41/webhook-server/0.log" Jan 28 17:57:55 crc kubenswrapper[5001]: I0128 17:57:55.763535 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fmxh8_4c2138ae-b9a4-4c2b-8049-ee00845be4d7/kube-rbac-proxy/0.log" Jan 28 17:57:56 crc kubenswrapper[5001]: I0128 17:57:56.111247 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fmxh8_4c2138ae-b9a4-4c2b-8049-ee00845be4d7/speaker/0.log" Jan 28 17:57:56 crc kubenswrapper[5001]: I0128 17:57:56.163513 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9mqw4_d4307f53-3a2c-4fb5-8c0b-395eaf4582bb/frr/0.log" Jan 28 17:58:01 crc kubenswrapper[5001]: I0128 17:58:01.825869 5001 scope.go:117] "RemoveContainer" containerID="c5945a10d5620d46b22e4d2495db5f14ebf539eb7324c0c3bf22cd0ec8850ef6" Jan 28 17:58:01 crc kubenswrapper[5001]: I0128 17:58:01.876682 5001 scope.go:117] "RemoveContainer" 
containerID="ba6f772a095a7aec34e99d19b02d196d35bbbf80633f19a39e35f4e504c1df9e" Jan 28 17:58:01 crc kubenswrapper[5001]: I0128 17:58:01.911521 5001 scope.go:117] "RemoveContainer" containerID="7c7410b8f82320e6c4c83a9f4d1215b7063c2bce5c83f9ec8bf8ad9c7bbcde49" Jan 28 17:58:04 crc kubenswrapper[5001]: I0128 17:58:04.594746 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:58:04 crc kubenswrapper[5001]: E0128 17:58:04.595227 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:58:12 crc kubenswrapper[5001]: I0128 17:58:12.134461 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-848f6f5b7c-k72xs_77f41721-97c0-4a00-83ca-c5fb40170cfa/keystone-api/0.log" Jan 28 17:58:12 crc kubenswrapper[5001]: I0128 17:58:12.686363 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_8bb877d2-6693-4274-a705-6551fe435fb2/nova-kuttl-api-api/0.log" Jan 28 17:58:12 crc kubenswrapper[5001]: I0128 17:58:12.848541 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_8bb877d2-6693-4274-a705-6551fe435fb2/nova-kuttl-api-log/0.log" Jan 28 17:58:12 crc kubenswrapper[5001]: I0128 17:58:12.951413 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-0_99fae513-1f96-42f3-9e69-e55b82c047dc/nova-kuttl-cell0-conductor-conductor/0.log" Jan 28 17:58:13 crc kubenswrapper[5001]: I0128 17:58:13.209525 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-0_95e53dae-9c1d-442b-b282-a377730ba93a/nova-kuttl-cell1-conductor-conductor/0.log" Jan 28 17:58:13 crc kubenswrapper[5001]: I0128 17:58:13.331101 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-novncproxy-0_1b94ad6a-ac77-455c-a73f-a9a047f5d714/nova-kuttl-cell1-novncproxy-novncproxy/0.log" Jan 28 17:58:13 crc kubenswrapper[5001]: I0128 17:58:13.555032 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_cdc03496-6661-435e-ae7f-c20ba5e7b381/nova-kuttl-metadata-log/0.log" Jan 28 17:58:13 crc kubenswrapper[5001]: I0128 17:58:13.562852 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_cdc03496-6661-435e-ae7f-c20ba5e7b381/nova-kuttl-metadata-metadata/0.log" Jan 28 17:58:13 crc kubenswrapper[5001]: I0128 17:58:13.789328 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-scheduler-0_7e25214b-bf18-4e49-82d6-53519f9b2ccd/nova-kuttl-scheduler-scheduler/0.log" Jan 28 17:58:13 crc kubenswrapper[5001]: I0128 17:58:13.923409 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_27cec822-2561-4682-bb1c-3fe4fd0805f4/mysql-bootstrap/0.log" Jan 28 17:58:14 crc kubenswrapper[5001]: I0128 17:58:14.125226 5001 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_27cec822-2561-4682-bb1c-3fe4fd0805f4/mysql-bootstrap/0.log" Jan 28 17:58:14 crc kubenswrapper[5001]: I0128 17:58:14.203488 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_cd92fc04-2e00-4c0a-b704-204aeeb70ff1/memcached/0.log" Jan 28 17:58:14 crc kubenswrapper[5001]: I0128 17:58:14.215880 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_27cec822-2561-4682-bb1c-3fe4fd0805f4/galera/0.log" Jan 28 17:58:14 crc kubenswrapper[5001]: I0128 17:58:14.300110 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_4efb190a-f4c2-4761-9fab-7e26fc702121/mysql-bootstrap/0.log" Jan 28 17:58:14 crc kubenswrapper[5001]: I0128 17:58:14.506360 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_4efb190a-f4c2-4761-9fab-7e26fc702121/mysql-bootstrap/0.log" Jan 28 17:58:14 crc kubenswrapper[5001]: I0128 17:58:14.537287 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_535ec83f-ed6c-460b-8369-d710976d266f/openstackclient/0.log" Jan 28 17:58:14 crc kubenswrapper[5001]: I0128 17:58:14.555859 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_4efb190a-f4c2-4761-9fab-7e26fc702121/galera/0.log" Jan 28 17:58:14 crc kubenswrapper[5001]: I0128 17:58:14.759575 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-b8bc4c9f4-htpjw_85ac1e3b-7a2c-4c90-adcb-34d4efd01f41/placement-api/0.log" Jan 28 17:58:14 crc kubenswrapper[5001]: I0128 17:58:14.766122 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-b8bc4c9f4-htpjw_85ac1e3b-7a2c-4c90-adcb-34d4efd01f41/placement-log/0.log" Jan 28 17:58:14 crc kubenswrapper[5001]: I0128 17:58:14.938410 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_bdcddc03-22a9-44af-8c79-67fe309358ec/setup-container/0.log" Jan 28 17:58:15 crc kubenswrapper[5001]: I0128 17:58:15.117167 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_bdcddc03-22a9-44af-8c79-67fe309358ec/setup-container/0.log" Jan 28 17:58:15 crc kubenswrapper[5001]: I0128 17:58:15.154057 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_bdcddc03-22a9-44af-8c79-67fe309358ec/rabbitmq/0.log" Jan 28 17:58:15 crc kubenswrapper[5001]: I0128 17:58:15.176882 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_1bc7adbb-0250-4320-98f4-7a0a69b77724/setup-container/0.log" Jan 28 17:58:15 crc kubenswrapper[5001]: I0128 17:58:15.359336 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_1bc7adbb-0250-4320-98f4-7a0a69b77724/setup-container/0.log" Jan 28 17:58:15 crc kubenswrapper[5001]: I0128 17:58:15.378852 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_420bd810-85d0-4ced-bcd8-3ae62c8c79e4/setup-container/0.log" Jan 28 17:58:15 crc kubenswrapper[5001]: I0128 17:58:15.395861 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_1bc7adbb-0250-4320-98f4-7a0a69b77724/rabbitmq/0.log" Jan 28 17:58:15 crc kubenswrapper[5001]: 
I0128 17:58:15.568424 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_420bd810-85d0-4ced-bcd8-3ae62c8c79e4/setup-container/0.log" Jan 28 17:58:15 crc kubenswrapper[5001]: I0128 17:58:15.630577 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_420bd810-85d0-4ced-bcd8-3ae62c8c79e4/rabbitmq/0.log" Jan 28 17:58:18 crc kubenswrapper[5001]: I0128 17:58:18.593949 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:58:18 crc kubenswrapper[5001]: E0128 17:58:18.594518 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:58:29 crc kubenswrapper[5001]: I0128 17:58:29.604820 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr_7540c5dd-c168-474a-9e79-e0fd9fa9f8e8/util/0.log" Jan 28 17:58:29 crc kubenswrapper[5001]: I0128 17:58:29.991150 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr_7540c5dd-c168-474a-9e79-e0fd9fa9f8e8/util/0.log" Jan 28 17:58:30 crc kubenswrapper[5001]: I0128 17:58:30.002910 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr_7540c5dd-c168-474a-9e79-e0fd9fa9f8e8/pull/0.log" Jan 28 17:58:30 crc kubenswrapper[5001]: I0128 17:58:30.092387 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr_7540c5dd-c168-474a-9e79-e0fd9fa9f8e8/pull/0.log" Jan 28 17:58:30 crc kubenswrapper[5001]: I0128 17:58:30.202675 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr_7540c5dd-c168-474a-9e79-e0fd9fa9f8e8/util/0.log" Jan 28 17:58:30 crc kubenswrapper[5001]: I0128 17:58:30.247475 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr_7540c5dd-c168-474a-9e79-e0fd9fa9f8e8/pull/0.log" Jan 28 17:58:30 crc kubenswrapper[5001]: I0128 17:58:30.288732 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akh5zr_7540c5dd-c168-474a-9e79-e0fd9fa9f8e8/extract/0.log" Jan 28 17:58:30 crc kubenswrapper[5001]: I0128 17:58:30.463215 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p_cda69916-8545-4983-b874-78620c94abbc/util/0.log" Jan 28 17:58:30 crc kubenswrapper[5001]: I0128 17:58:30.574811 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p_cda69916-8545-4983-b874-78620c94abbc/pull/0.log" Jan 28 17:58:30 crc kubenswrapper[5001]: I0128 17:58:30.578238 5001 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p_cda69916-8545-4983-b874-78620c94abbc/util/0.log" Jan 28 17:58:30 crc kubenswrapper[5001]: I0128 17:58:30.594555 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:58:30 crc kubenswrapper[5001]: E0128 17:58:30.594797 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:58:30 crc kubenswrapper[5001]: I0128 17:58:30.615484 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p_cda69916-8545-4983-b874-78620c94abbc/pull/0.log" Jan 28 17:58:30 crc kubenswrapper[5001]: I0128 17:58:30.790610 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p_cda69916-8545-4983-b874-78620c94abbc/extract/0.log" Jan 28 17:58:30 crc kubenswrapper[5001]: I0128 17:58:30.801068 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p_cda69916-8545-4983-b874-78620c94abbc/pull/0.log" Jan 28 17:58:30 crc kubenswrapper[5001]: I0128 17:58:30.804943 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn7l9p_cda69916-8545-4983-b874-78620c94abbc/util/0.log" Jan 28 17:58:30 crc kubenswrapper[5001]: I0128 17:58:30.989250 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq_5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc/util/0.log" Jan 28 17:58:31 crc kubenswrapper[5001]: I0128 17:58:31.149089 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq_5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc/util/0.log" Jan 28 17:58:31 crc kubenswrapper[5001]: I0128 17:58:31.208736 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq_5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc/pull/0.log" Jan 28 17:58:31 crc kubenswrapper[5001]: I0128 17:58:31.222787 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq_5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc/pull/0.log" Jan 28 17:58:31 crc kubenswrapper[5001]: I0128 17:58:31.386637 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq_5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc/pull/0.log" Jan 28 17:58:31 crc kubenswrapper[5001]: I0128 17:58:31.406624 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq_5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc/util/0.log" Jan 28 17:58:31 crc kubenswrapper[5001]: I0128 17:58:31.447432 5001 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7136ckmq_5f14b7ed-79e7-4ec3-888a-cf84bd18a9cc/extract/0.log" Jan 28 17:58:31 crc kubenswrapper[5001]: I0128 17:58:31.563662 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dmrx4_8a1cfc19-6968-49a9-ac16-d66f8f79873e/extract-utilities/0.log" Jan 28 17:58:31 crc kubenswrapper[5001]: I0128 17:58:31.766369 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dmrx4_8a1cfc19-6968-49a9-ac16-d66f8f79873e/extract-utilities/0.log" Jan 28 17:58:31 crc kubenswrapper[5001]: I0128 17:58:31.770236 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dmrx4_8a1cfc19-6968-49a9-ac16-d66f8f79873e/extract-content/0.log" Jan 28 17:58:31 crc kubenswrapper[5001]: I0128 17:58:31.770319 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dmrx4_8a1cfc19-6968-49a9-ac16-d66f8f79873e/extract-content/0.log" Jan 28 17:58:32 crc kubenswrapper[5001]: I0128 17:58:32.001653 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dmrx4_8a1cfc19-6968-49a9-ac16-d66f8f79873e/extract-content/0.log" Jan 28 17:58:32 crc kubenswrapper[5001]: I0128 17:58:32.015005 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dmrx4_8a1cfc19-6968-49a9-ac16-d66f8f79873e/extract-utilities/0.log" Jan 28 17:58:32 crc kubenswrapper[5001]: I0128 17:58:32.230291 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bb6js_459effeb-5d45-4ff0-92ec-cbd95f88d17c/extract-utilities/0.log" Jan 28 17:58:32 crc kubenswrapper[5001]: I0128 17:58:32.440462 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bb6js_459effeb-5d45-4ff0-92ec-cbd95f88d17c/extract-content/0.log" Jan 28 17:58:32 crc kubenswrapper[5001]: I0128 17:58:32.549928 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bb6js_459effeb-5d45-4ff0-92ec-cbd95f88d17c/extract-utilities/0.log" Jan 28 17:58:32 crc kubenswrapper[5001]: I0128 17:58:32.566496 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bb6js_459effeb-5d45-4ff0-92ec-cbd95f88d17c/extract-content/0.log" Jan 28 17:58:32 crc kubenswrapper[5001]: I0128 17:58:32.612226 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dmrx4_8a1cfc19-6968-49a9-ac16-d66f8f79873e/registry-server/0.log" Jan 28 17:58:32 crc kubenswrapper[5001]: I0128 17:58:32.679683 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bb6js_459effeb-5d45-4ff0-92ec-cbd95f88d17c/extract-utilities/0.log" Jan 28 17:58:32 crc kubenswrapper[5001]: I0128 17:58:32.771830 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bb6js_459effeb-5d45-4ff0-92ec-cbd95f88d17c/extract-content/0.log" Jan 28 17:58:32 crc kubenswrapper[5001]: I0128 17:58:32.951845 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-92vjb_32ddd6d6-443e-4772-823f-9ff2580fa385/marketplace-operator/0.log" Jan 28 17:58:33 crc kubenswrapper[5001]: I0128 17:58:33.073265 5001 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bb6js_459effeb-5d45-4ff0-92ec-cbd95f88d17c/registry-server/0.log" Jan 28 17:58:33 crc kubenswrapper[5001]: I0128 17:58:33.087711 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4q4m6_560f1835-9368-4231-9d85-b0cbcad12b8c/extract-utilities/0.log" Jan 28 17:58:33 crc kubenswrapper[5001]: I0128 17:58:33.299805 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4q4m6_560f1835-9368-4231-9d85-b0cbcad12b8c/extract-content/0.log" Jan 28 17:58:33 crc kubenswrapper[5001]: I0128 17:58:33.302014 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4q4m6_560f1835-9368-4231-9d85-b0cbcad12b8c/extract-utilities/0.log" Jan 28 17:58:33 crc kubenswrapper[5001]: I0128 17:58:33.308805 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4q4m6_560f1835-9368-4231-9d85-b0cbcad12b8c/extract-content/0.log" Jan 28 17:58:33 crc kubenswrapper[5001]: I0128 17:58:33.569953 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4q4m6_560f1835-9368-4231-9d85-b0cbcad12b8c/extract-utilities/0.log" Jan 28 17:58:33 crc kubenswrapper[5001]: I0128 17:58:33.576014 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4q4m6_560f1835-9368-4231-9d85-b0cbcad12b8c/extract-content/0.log" Jan 28 17:58:33 crc kubenswrapper[5001]: I0128 17:58:33.697940 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4q4m6_560f1835-9368-4231-9d85-b0cbcad12b8c/registry-server/0.log" Jan 28 17:58:33 crc kubenswrapper[5001]: I0128 17:58:33.828101 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tlg9_aedcee55-c6fd-4322-b635-e0fca159ef41/extract-utilities/0.log" Jan 28 17:58:33 crc kubenswrapper[5001]: I0128 17:58:33.994259 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tlg9_aedcee55-c6fd-4322-b635-e0fca159ef41/extract-utilities/0.log" Jan 28 17:58:33 crc kubenswrapper[5001]: I0128 17:58:33.997764 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tlg9_aedcee55-c6fd-4322-b635-e0fca159ef41/extract-content/0.log" Jan 28 17:58:34 crc kubenswrapper[5001]: I0128 17:58:34.011037 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tlg9_aedcee55-c6fd-4322-b635-e0fca159ef41/extract-content/0.log" Jan 28 17:58:34 crc kubenswrapper[5001]: I0128 17:58:34.237811 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tlg9_aedcee55-c6fd-4322-b635-e0fca159ef41/extract-content/0.log" Jan 28 17:58:34 crc kubenswrapper[5001]: I0128 17:58:34.292035 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tlg9_aedcee55-c6fd-4322-b635-e0fca159ef41/extract-utilities/0.log" Jan 28 17:58:34 crc kubenswrapper[5001]: I0128 17:58:34.695141 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tlg9_aedcee55-c6fd-4322-b635-e0fca159ef41/registry-server/0.log" Jan 28 17:58:42 crc kubenswrapper[5001]: I0128 17:58:42.594767 5001 scope.go:117] "RemoveContainer" 
containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:58:42 crc kubenswrapper[5001]: E0128 17:58:42.595425 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:58:53 crc kubenswrapper[5001]: I0128 17:58:53.594861 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:58:53 crc kubenswrapper[5001]: E0128 17:58:53.596177 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:59:06 crc kubenswrapper[5001]: I0128 17:59:06.594699 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:59:06 crc kubenswrapper[5001]: E0128 17:59:06.595766 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:59:17 crc kubenswrapper[5001]: I0128 17:59:17.595580 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:59:17 crc kubenswrapper[5001]: E0128 17:59:17.597217 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:59:28 crc kubenswrapper[5001]: I0128 17:59:28.696084 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rh7pg"] Jan 28 17:59:28 crc kubenswrapper[5001]: E0128 17:59:28.696872 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:59:28 crc kubenswrapper[5001]: I0128 17:59:28.696884 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="44595eef-a540-442d-8c7a-5f8bd2f2488c" containerName="nova-manage" Jan 28 17:59:28 crc kubenswrapper[5001]: I0128 17:59:28.698494 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rh7pg" Jan 28 17:59:28 crc kubenswrapper[5001]: I0128 17:59:28.717028 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rh7pg"] Jan 28 17:59:28 crc kubenswrapper[5001]: I0128 17:59:28.819212 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbmrh\" (UniqueName: \"kubernetes.io/projected/ebb576e5-a32f-4c92-94e3-5e835afc4ca5-kube-api-access-dbmrh\") pod \"community-operators-rh7pg\" (UID: \"ebb576e5-a32f-4c92-94e3-5e835afc4ca5\") " pod="openshift-marketplace/community-operators-rh7pg" Jan 28 17:59:28 crc kubenswrapper[5001]: I0128 17:59:28.819338 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb576e5-a32f-4c92-94e3-5e835afc4ca5-utilities\") pod \"community-operators-rh7pg\" (UID: \"ebb576e5-a32f-4c92-94e3-5e835afc4ca5\") " pod="openshift-marketplace/community-operators-rh7pg" Jan 28 17:59:28 crc kubenswrapper[5001]: I0128 17:59:28.819356 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb576e5-a32f-4c92-94e3-5e835afc4ca5-catalog-content\") pod \"community-operators-rh7pg\" (UID: \"ebb576e5-a32f-4c92-94e3-5e835afc4ca5\") " pod="openshift-marketplace/community-operators-rh7pg" Jan 28 17:59:28 crc kubenswrapper[5001]: I0128 17:59:28.920758 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb576e5-a32f-4c92-94e3-5e835afc4ca5-utilities\") pod \"community-operators-rh7pg\" (UID: \"ebb576e5-a32f-4c92-94e3-5e835afc4ca5\") " pod="openshift-marketplace/community-operators-rh7pg" Jan 28 17:59:28 crc kubenswrapper[5001]: I0128 17:59:28.920812 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb576e5-a32f-4c92-94e3-5e835afc4ca5-catalog-content\") pod \"community-operators-rh7pg\" (UID: \"ebb576e5-a32f-4c92-94e3-5e835afc4ca5\") " pod="openshift-marketplace/community-operators-rh7pg" Jan 28 17:59:28 crc kubenswrapper[5001]: I0128 17:59:28.920869 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbmrh\" (UniqueName: \"kubernetes.io/projected/ebb576e5-a32f-4c92-94e3-5e835afc4ca5-kube-api-access-dbmrh\") pod \"community-operators-rh7pg\" (UID: \"ebb576e5-a32f-4c92-94e3-5e835afc4ca5\") " pod="openshift-marketplace/community-operators-rh7pg" Jan 28 17:59:28 crc kubenswrapper[5001]: I0128 17:59:28.921361 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb576e5-a32f-4c92-94e3-5e835afc4ca5-utilities\") pod \"community-operators-rh7pg\" (UID: \"ebb576e5-a32f-4c92-94e3-5e835afc4ca5\") " pod="openshift-marketplace/community-operators-rh7pg" Jan 28 17:59:28 crc kubenswrapper[5001]: I0128 17:59:28.921447 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb576e5-a32f-4c92-94e3-5e835afc4ca5-catalog-content\") pod \"community-operators-rh7pg\" (UID: \"ebb576e5-a32f-4c92-94e3-5e835afc4ca5\") " pod="openshift-marketplace/community-operators-rh7pg" Jan 28 17:59:28 crc kubenswrapper[5001]: I0128 17:59:28.940878 5001 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dbmrh\" (UniqueName: \"kubernetes.io/projected/ebb576e5-a32f-4c92-94e3-5e835afc4ca5-kube-api-access-dbmrh\") pod \"community-operators-rh7pg\" (UID: \"ebb576e5-a32f-4c92-94e3-5e835afc4ca5\") " pod="openshift-marketplace/community-operators-rh7pg" Jan 28 17:59:29 crc kubenswrapper[5001]: I0128 17:59:29.020183 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rh7pg" Jan 28 17:59:29 crc kubenswrapper[5001]: I0128 17:59:29.507921 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rh7pg"] Jan 28 17:59:29 crc kubenswrapper[5001]: W0128 17:59:29.520131 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebb576e5_a32f_4c92_94e3_5e835afc4ca5.slice/crio-d826f14397432c5af11584545ab883a81de7e2a18c429d8d3a429a5feac7e584 WatchSource:0}: Error finding container d826f14397432c5af11584545ab883a81de7e2a18c429d8d3a429a5feac7e584: Status 404 returned error can't find the container with id d826f14397432c5af11584545ab883a81de7e2a18c429d8d3a429a5feac7e584 Jan 28 17:59:29 crc kubenswrapper[5001]: I0128 17:59:29.577113 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rh7pg" event={"ID":"ebb576e5-a32f-4c92-94e3-5e835afc4ca5","Type":"ContainerStarted","Data":"d826f14397432c5af11584545ab883a81de7e2a18c429d8d3a429a5feac7e584"} Jan 28 17:59:30 crc kubenswrapper[5001]: I0128 17:59:30.586157 5001 generic.go:334] "Generic (PLEG): container finished" podID="ebb576e5-a32f-4c92-94e3-5e835afc4ca5" containerID="f69b3f5d247b40eff62bbe62d93b889f53df4c39c7e8368ef211be779cbc159d" exitCode=0 Jan 28 17:59:30 crc kubenswrapper[5001]: I0128 17:59:30.586215 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rh7pg" event={"ID":"ebb576e5-a32f-4c92-94e3-5e835afc4ca5","Type":"ContainerDied","Data":"f69b3f5d247b40eff62bbe62d93b889f53df4c39c7e8368ef211be779cbc159d"} Jan 28 17:59:30 crc kubenswrapper[5001]: I0128 17:59:30.587804 5001 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 17:59:30 crc kubenswrapper[5001]: I0128 17:59:30.593853 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:59:30 crc kubenswrapper[5001]: E0128 17:59:30.594069 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:59:32 crc kubenswrapper[5001]: I0128 17:59:32.613284 5001 generic.go:334] "Generic (PLEG): container finished" podID="ebb576e5-a32f-4c92-94e3-5e835afc4ca5" containerID="3f93c761758d7bfbc1292a2c959d29528d4ced140f11b7c73c5da6e9612b6f3b" exitCode=0 Jan 28 17:59:32 crc kubenswrapper[5001]: I0128 17:59:32.621132 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rh7pg" event={"ID":"ebb576e5-a32f-4c92-94e3-5e835afc4ca5","Type":"ContainerDied","Data":"3f93c761758d7bfbc1292a2c959d29528d4ced140f11b7c73c5da6e9612b6f3b"} Jan 28 17:59:33 crc 
kubenswrapper[5001]: I0128 17:59:33.626139 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rh7pg" event={"ID":"ebb576e5-a32f-4c92-94e3-5e835afc4ca5","Type":"ContainerStarted","Data":"e8b7a5ce932ca313097c6bd2d57315b0147ccb67353c2463dbeda81c0350ee18"} Jan 28 17:59:33 crc kubenswrapper[5001]: I0128 17:59:33.647693 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rh7pg" podStartSLOduration=3.064897623 podStartE2EDuration="5.647678651s" podCreationTimestamp="2026-01-28 17:59:28 +0000 UTC" firstStartedPulling="2026-01-28 17:59:30.58761174 +0000 UTC m=+2616.755399970" lastFinishedPulling="2026-01-28 17:59:33.170392758 +0000 UTC m=+2619.338180998" observedRunningTime="2026-01-28 17:59:33.643713568 +0000 UTC m=+2619.811501798" watchObservedRunningTime="2026-01-28 17:59:33.647678651 +0000 UTC m=+2619.815466881" Jan 28 17:59:35 crc kubenswrapper[5001]: I0128 17:59:35.076420 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7jmr7"] Jan 28 17:59:35 crc kubenswrapper[5001]: I0128 17:59:35.078791 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jmr7" Jan 28 17:59:35 crc kubenswrapper[5001]: I0128 17:59:35.090165 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7jmr7"] Jan 28 17:59:35 crc kubenswrapper[5001]: I0128 17:59:35.117212 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658-utilities\") pod \"redhat-operators-7jmr7\" (UID: \"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658\") " pod="openshift-marketplace/redhat-operators-7jmr7" Jan 28 17:59:35 crc kubenswrapper[5001]: I0128 17:59:35.117327 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvmnd\" (UniqueName: \"kubernetes.io/projected/e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658-kube-api-access-vvmnd\") pod \"redhat-operators-7jmr7\" (UID: \"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658\") " pod="openshift-marketplace/redhat-operators-7jmr7" Jan 28 17:59:35 crc kubenswrapper[5001]: I0128 17:59:35.117413 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658-catalog-content\") pod \"redhat-operators-7jmr7\" (UID: \"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658\") " pod="openshift-marketplace/redhat-operators-7jmr7" Jan 28 17:59:35 crc kubenswrapper[5001]: I0128 17:59:35.218802 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvmnd\" (UniqueName: \"kubernetes.io/projected/e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658-kube-api-access-vvmnd\") pod \"redhat-operators-7jmr7\" (UID: \"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658\") " pod="openshift-marketplace/redhat-operators-7jmr7" Jan 28 17:59:35 crc kubenswrapper[5001]: I0128 17:59:35.218877 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658-catalog-content\") pod \"redhat-operators-7jmr7\" (UID: \"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658\") " pod="openshift-marketplace/redhat-operators-7jmr7" Jan 28 17:59:35 crc kubenswrapper[5001]: I0128 17:59:35.219051 
5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658-utilities\") pod \"redhat-operators-7jmr7\" (UID: \"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658\") " pod="openshift-marketplace/redhat-operators-7jmr7" Jan 28 17:59:35 crc kubenswrapper[5001]: I0128 17:59:35.219565 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658-catalog-content\") pod \"redhat-operators-7jmr7\" (UID: \"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658\") " pod="openshift-marketplace/redhat-operators-7jmr7" Jan 28 17:59:35 crc kubenswrapper[5001]: I0128 17:59:35.219573 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658-utilities\") pod \"redhat-operators-7jmr7\" (UID: \"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658\") " pod="openshift-marketplace/redhat-operators-7jmr7" Jan 28 17:59:35 crc kubenswrapper[5001]: I0128 17:59:35.239718 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvmnd\" (UniqueName: \"kubernetes.io/projected/e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658-kube-api-access-vvmnd\") pod \"redhat-operators-7jmr7\" (UID: \"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658\") " pod="openshift-marketplace/redhat-operators-7jmr7" Jan 28 17:59:35 crc kubenswrapper[5001]: I0128 17:59:35.406963 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jmr7" Jan 28 17:59:35 crc kubenswrapper[5001]: I0128 17:59:35.980584 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7jmr7"] Jan 28 17:59:36 crc kubenswrapper[5001]: I0128 17:59:36.653383 5001 generic.go:334] "Generic (PLEG): container finished" podID="e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658" containerID="a1e4ae9f09dc943fea8ff03c065b6ca7dbaaf729a0b6a8c7085634dae07b931d" exitCode=0 Jan 28 17:59:36 crc kubenswrapper[5001]: I0128 17:59:36.653433 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jmr7" event={"ID":"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658","Type":"ContainerDied","Data":"a1e4ae9f09dc943fea8ff03c065b6ca7dbaaf729a0b6a8c7085634dae07b931d"} Jan 28 17:59:36 crc kubenswrapper[5001]: I0128 17:59:36.653656 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jmr7" event={"ID":"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658","Type":"ContainerStarted","Data":"03aaeaebe46ae421b6a942f37246bdc89fa7ddf63252f4735e588c22a8865f98"} Jan 28 17:59:37 crc kubenswrapper[5001]: I0128 17:59:37.662509 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jmr7" event={"ID":"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658","Type":"ContainerStarted","Data":"5b890d9ee1fd921b0a39542295b153d30e7324b206971c21a10ac5665076b8a9"} Jan 28 17:59:38 crc kubenswrapper[5001]: I0128 17:59:38.670649 5001 generic.go:334] "Generic (PLEG): container finished" podID="e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658" containerID="5b890d9ee1fd921b0a39542295b153d30e7324b206971c21a10ac5665076b8a9" exitCode=0 Jan 28 17:59:38 crc kubenswrapper[5001]: I0128 17:59:38.670689 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jmr7" 
event={"ID":"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658","Type":"ContainerDied","Data":"5b890d9ee1fd921b0a39542295b153d30e7324b206971c21a10ac5665076b8a9"} Jan 28 17:59:39 crc kubenswrapper[5001]: I0128 17:59:39.020341 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rh7pg" Jan 28 17:59:39 crc kubenswrapper[5001]: I0128 17:59:39.021273 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rh7pg" Jan 28 17:59:39 crc kubenswrapper[5001]: I0128 17:59:39.083177 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rh7pg" Jan 28 17:59:39 crc kubenswrapper[5001]: I0128 17:59:39.714475 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rh7pg" Jan 28 17:59:40 crc kubenswrapper[5001]: I0128 17:59:40.687896 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jmr7" event={"ID":"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658","Type":"ContainerStarted","Data":"25cd14093aee34614174631efde0a14135382dfd03d4e4f60db5b6b24318d589"} Jan 28 17:59:40 crc kubenswrapper[5001]: I0128 17:59:40.712409 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7jmr7" podStartSLOduration=2.570050106 podStartE2EDuration="5.712386704s" podCreationTimestamp="2026-01-28 17:59:35 +0000 UTC" firstStartedPulling="2026-01-28 17:59:36.655225869 +0000 UTC m=+2622.823014099" lastFinishedPulling="2026-01-28 17:59:39.797562457 +0000 UTC m=+2625.965350697" observedRunningTime="2026-01-28 17:59:40.703545661 +0000 UTC m=+2626.871333901" watchObservedRunningTime="2026-01-28 17:59:40.712386704 +0000 UTC m=+2626.880174934" Jan 28 17:59:41 crc kubenswrapper[5001]: I0128 17:59:41.464265 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rh7pg"] Jan 28 17:59:42 crc kubenswrapper[5001]: I0128 17:59:42.707453 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rh7pg" podUID="ebb576e5-a32f-4c92-94e3-5e835afc4ca5" containerName="registry-server" containerID="cri-o://e8b7a5ce932ca313097c6bd2d57315b0147ccb67353c2463dbeda81c0350ee18" gracePeriod=2 Jan 28 17:59:43 crc kubenswrapper[5001]: I0128 17:59:43.716731 5001 generic.go:334] "Generic (PLEG): container finished" podID="ebb576e5-a32f-4c92-94e3-5e835afc4ca5" containerID="e8b7a5ce932ca313097c6bd2d57315b0147ccb67353c2463dbeda81c0350ee18" exitCode=0 Jan 28 17:59:43 crc kubenswrapper[5001]: I0128 17:59:43.717140 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rh7pg" event={"ID":"ebb576e5-a32f-4c92-94e3-5e835afc4ca5","Type":"ContainerDied","Data":"e8b7a5ce932ca313097c6bd2d57315b0147ccb67353c2463dbeda81c0350ee18"} Jan 28 17:59:44 crc kubenswrapper[5001]: I0128 17:59:44.029065 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rh7pg" Jan 28 17:59:44 crc kubenswrapper[5001]: I0128 17:59:44.063234 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbmrh\" (UniqueName: \"kubernetes.io/projected/ebb576e5-a32f-4c92-94e3-5e835afc4ca5-kube-api-access-dbmrh\") pod \"ebb576e5-a32f-4c92-94e3-5e835afc4ca5\" (UID: \"ebb576e5-a32f-4c92-94e3-5e835afc4ca5\") " Jan 28 17:59:44 crc kubenswrapper[5001]: I0128 17:59:44.063288 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb576e5-a32f-4c92-94e3-5e835afc4ca5-utilities\") pod \"ebb576e5-a32f-4c92-94e3-5e835afc4ca5\" (UID: \"ebb576e5-a32f-4c92-94e3-5e835afc4ca5\") " Jan 28 17:59:44 crc kubenswrapper[5001]: I0128 17:59:44.063385 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb576e5-a32f-4c92-94e3-5e835afc4ca5-catalog-content\") pod \"ebb576e5-a32f-4c92-94e3-5e835afc4ca5\" (UID: \"ebb576e5-a32f-4c92-94e3-5e835afc4ca5\") " Jan 28 17:59:44 crc kubenswrapper[5001]: I0128 17:59:44.066592 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebb576e5-a32f-4c92-94e3-5e835afc4ca5-utilities" (OuterVolumeSpecName: "utilities") pod "ebb576e5-a32f-4c92-94e3-5e835afc4ca5" (UID: "ebb576e5-a32f-4c92-94e3-5e835afc4ca5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:59:44 crc kubenswrapper[5001]: I0128 17:59:44.070500 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb576e5-a32f-4c92-94e3-5e835afc4ca5-kube-api-access-dbmrh" (OuterVolumeSpecName: "kube-api-access-dbmrh") pod "ebb576e5-a32f-4c92-94e3-5e835afc4ca5" (UID: "ebb576e5-a32f-4c92-94e3-5e835afc4ca5"). InnerVolumeSpecName "kube-api-access-dbmrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:59:44 crc kubenswrapper[5001]: I0128 17:59:44.111392 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebb576e5-a32f-4c92-94e3-5e835afc4ca5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ebb576e5-a32f-4c92-94e3-5e835afc4ca5" (UID: "ebb576e5-a32f-4c92-94e3-5e835afc4ca5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:59:44 crc kubenswrapper[5001]: I0128 17:59:44.165096 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb576e5-a32f-4c92-94e3-5e835afc4ca5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:59:44 crc kubenswrapper[5001]: I0128 17:59:44.165138 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbmrh\" (UniqueName: \"kubernetes.io/projected/ebb576e5-a32f-4c92-94e3-5e835afc4ca5-kube-api-access-dbmrh\") on node \"crc\" DevicePath \"\"" Jan 28 17:59:44 crc kubenswrapper[5001]: I0128 17:59:44.165153 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb576e5-a32f-4c92-94e3-5e835afc4ca5-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:59:44 crc kubenswrapper[5001]: I0128 17:59:44.603968 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:59:44 crc kubenswrapper[5001]: E0128 17:59:44.604242 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 17:59:44 crc kubenswrapper[5001]: I0128 17:59:44.745934 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rh7pg" event={"ID":"ebb576e5-a32f-4c92-94e3-5e835afc4ca5","Type":"ContainerDied","Data":"d826f14397432c5af11584545ab883a81de7e2a18c429d8d3a429a5feac7e584"} Jan 28 17:59:44 crc kubenswrapper[5001]: I0128 17:59:44.746039 5001 scope.go:117] "RemoveContainer" containerID="e8b7a5ce932ca313097c6bd2d57315b0147ccb67353c2463dbeda81c0350ee18" Jan 28 17:59:44 crc kubenswrapper[5001]: I0128 17:59:44.747582 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rh7pg" Jan 28 17:59:44 crc kubenswrapper[5001]: I0128 17:59:44.773846 5001 scope.go:117] "RemoveContainer" containerID="3f93c761758d7bfbc1292a2c959d29528d4ced140f11b7c73c5da6e9612b6f3b" Jan 28 17:59:44 crc kubenswrapper[5001]: I0128 17:59:44.774051 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rh7pg"] Jan 28 17:59:44 crc kubenswrapper[5001]: I0128 17:59:44.780508 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rh7pg"] Jan 28 17:59:44 crc kubenswrapper[5001]: I0128 17:59:44.791206 5001 scope.go:117] "RemoveContainer" containerID="f69b3f5d247b40eff62bbe62d93b889f53df4c39c7e8368ef211be779cbc159d" Jan 28 17:59:45 crc kubenswrapper[5001]: I0128 17:59:45.407419 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7jmr7" Jan 28 17:59:45 crc kubenswrapper[5001]: I0128 17:59:45.407477 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7jmr7" Jan 28 17:59:46 crc kubenswrapper[5001]: I0128 17:59:46.451100 5001 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7jmr7" podUID="e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658" containerName="registry-server" probeResult="failure" output=< Jan 28 17:59:46 crc kubenswrapper[5001]: timeout: failed to connect service ":50051" within 1s Jan 28 17:59:46 crc kubenswrapper[5001]: > Jan 28 17:59:46 crc kubenswrapper[5001]: I0128 17:59:46.607855 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebb576e5-a32f-4c92-94e3-5e835afc4ca5" path="/var/lib/kubelet/pods/ebb576e5-a32f-4c92-94e3-5e835afc4ca5/volumes" Jan 28 17:59:54 crc kubenswrapper[5001]: I0128 17:59:54.832416 5001 generic.go:334] "Generic (PLEG): container finished" podID="4cec17ac-482a-4e10-9a55-6f61b3b3eddf" containerID="0c5c93da5361b264e7e73d15ef19f3f0cfcf8fd44dcc38da47a1dea8dc751bcc" exitCode=0 Jan 28 17:59:54 crc kubenswrapper[5001]: I0128 17:59:54.832538 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pjnn8/must-gather-44722" event={"ID":"4cec17ac-482a-4e10-9a55-6f61b3b3eddf","Type":"ContainerDied","Data":"0c5c93da5361b264e7e73d15ef19f3f0cfcf8fd44dcc38da47a1dea8dc751bcc"} Jan 28 17:59:54 crc kubenswrapper[5001]: I0128 17:59:54.833593 5001 scope.go:117] "RemoveContainer" containerID="0c5c93da5361b264e7e73d15ef19f3f0cfcf8fd44dcc38da47a1dea8dc751bcc" Jan 28 17:59:55 crc kubenswrapper[5001]: I0128 17:59:55.447778 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7jmr7" Jan 28 17:59:55 crc kubenswrapper[5001]: I0128 17:59:55.493811 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7jmr7" Jan 28 17:59:55 crc kubenswrapper[5001]: I0128 17:59:55.539460 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pjnn8_must-gather-44722_4cec17ac-482a-4e10-9a55-6f61b3b3eddf/gather/0.log" Jan 28 17:59:55 crc kubenswrapper[5001]: I0128 17:59:55.688368 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7jmr7"] Jan 28 17:59:56 crc kubenswrapper[5001]: I0128 17:59:56.853951 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7jmr7" 
podUID="e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658" containerName="registry-server" containerID="cri-o://25cd14093aee34614174631efde0a14135382dfd03d4e4f60db5b6b24318d589" gracePeriod=2 Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.361091 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jmr7" Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.494430 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658-catalog-content\") pod \"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658\" (UID: \"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658\") " Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.494517 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvmnd\" (UniqueName: \"kubernetes.io/projected/e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658-kube-api-access-vvmnd\") pod \"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658\" (UID: \"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658\") " Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.494567 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658-utilities\") pod \"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658\" (UID: \"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658\") " Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.495557 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658-utilities" (OuterVolumeSpecName: "utilities") pod "e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658" (UID: "e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.499226 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658-kube-api-access-vvmnd" (OuterVolumeSpecName: "kube-api-access-vvmnd") pod "e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658" (UID: "e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658"). InnerVolumeSpecName "kube-api-access-vvmnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.596885 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvmnd\" (UniqueName: \"kubernetes.io/projected/e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658-kube-api-access-vvmnd\") on node \"crc\" DevicePath \"\"" Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.596934 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.646135 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658" (UID: "e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.699106 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.864309 5001 generic.go:334] "Generic (PLEG): container finished" podID="e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658" containerID="25cd14093aee34614174631efde0a14135382dfd03d4e4f60db5b6b24318d589" exitCode=0 Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.864351 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jmr7" event={"ID":"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658","Type":"ContainerDied","Data":"25cd14093aee34614174631efde0a14135382dfd03d4e4f60db5b6b24318d589"} Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.864379 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jmr7" event={"ID":"e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658","Type":"ContainerDied","Data":"03aaeaebe46ae421b6a942f37246bdc89fa7ddf63252f4735e588c22a8865f98"} Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.864395 5001 scope.go:117] "RemoveContainer" containerID="25cd14093aee34614174631efde0a14135382dfd03d4e4f60db5b6b24318d589" Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.864409 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jmr7" Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.882996 5001 scope.go:117] "RemoveContainer" containerID="5b890d9ee1fd921b0a39542295b153d30e7324b206971c21a10ac5665076b8a9" Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.908035 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7jmr7"] Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.914892 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7jmr7"] Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.919094 5001 scope.go:117] "RemoveContainer" containerID="a1e4ae9f09dc943fea8ff03c065b6ca7dbaaf729a0b6a8c7085634dae07b931d" Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.942111 5001 scope.go:117] "RemoveContainer" containerID="25cd14093aee34614174631efde0a14135382dfd03d4e4f60db5b6b24318d589" Jan 28 17:59:57 crc kubenswrapper[5001]: E0128 17:59:57.942516 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25cd14093aee34614174631efde0a14135382dfd03d4e4f60db5b6b24318d589\": container with ID starting with 25cd14093aee34614174631efde0a14135382dfd03d4e4f60db5b6b24318d589 not found: ID does not exist" containerID="25cd14093aee34614174631efde0a14135382dfd03d4e4f60db5b6b24318d589" Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.942630 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25cd14093aee34614174631efde0a14135382dfd03d4e4f60db5b6b24318d589"} err="failed to get container status \"25cd14093aee34614174631efde0a14135382dfd03d4e4f60db5b6b24318d589\": rpc error: code = NotFound desc = could not find container \"25cd14093aee34614174631efde0a14135382dfd03d4e4f60db5b6b24318d589\": container with ID starting with 25cd14093aee34614174631efde0a14135382dfd03d4e4f60db5b6b24318d589 not found: ID does not exist" Jan 28 17:59:57 crc 
kubenswrapper[5001]: I0128 17:59:57.942733 5001 scope.go:117] "RemoveContainer" containerID="5b890d9ee1fd921b0a39542295b153d30e7324b206971c21a10ac5665076b8a9" Jan 28 17:59:57 crc kubenswrapper[5001]: E0128 17:59:57.943178 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b890d9ee1fd921b0a39542295b153d30e7324b206971c21a10ac5665076b8a9\": container with ID starting with 5b890d9ee1fd921b0a39542295b153d30e7324b206971c21a10ac5665076b8a9 not found: ID does not exist" containerID="5b890d9ee1fd921b0a39542295b153d30e7324b206971c21a10ac5665076b8a9" Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.943274 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b890d9ee1fd921b0a39542295b153d30e7324b206971c21a10ac5665076b8a9"} err="failed to get container status \"5b890d9ee1fd921b0a39542295b153d30e7324b206971c21a10ac5665076b8a9\": rpc error: code = NotFound desc = could not find container \"5b890d9ee1fd921b0a39542295b153d30e7324b206971c21a10ac5665076b8a9\": container with ID starting with 5b890d9ee1fd921b0a39542295b153d30e7324b206971c21a10ac5665076b8a9 not found: ID does not exist" Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.943383 5001 scope.go:117] "RemoveContainer" containerID="a1e4ae9f09dc943fea8ff03c065b6ca7dbaaf729a0b6a8c7085634dae07b931d" Jan 28 17:59:57 crc kubenswrapper[5001]: E0128 17:59:57.943754 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1e4ae9f09dc943fea8ff03c065b6ca7dbaaf729a0b6a8c7085634dae07b931d\": container with ID starting with a1e4ae9f09dc943fea8ff03c065b6ca7dbaaf729a0b6a8c7085634dae07b931d not found: ID does not exist" containerID="a1e4ae9f09dc943fea8ff03c065b6ca7dbaaf729a0b6a8c7085634dae07b931d" Jan 28 17:59:57 crc kubenswrapper[5001]: I0128 17:59:57.943842 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1e4ae9f09dc943fea8ff03c065b6ca7dbaaf729a0b6a8c7085634dae07b931d"} err="failed to get container status \"a1e4ae9f09dc943fea8ff03c065b6ca7dbaaf729a0b6a8c7085634dae07b931d\": rpc error: code = NotFound desc = could not find container \"a1e4ae9f09dc943fea8ff03c065b6ca7dbaaf729a0b6a8c7085634dae07b931d\": container with ID starting with a1e4ae9f09dc943fea8ff03c065b6ca7dbaaf729a0b6a8c7085634dae07b931d not found: ID does not exist" Jan 28 17:59:58 crc kubenswrapper[5001]: I0128 17:59:58.603222 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658" path="/var/lib/kubelet/pods/e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658/volumes" Jan 28 17:59:59 crc kubenswrapper[5001]: I0128 17:59:59.594373 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 17:59:59 crc kubenswrapper[5001]: E0128 17:59:59.595006 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.149307 5001 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29493720-fnmlr"] Jan 28 18:00:00 crc kubenswrapper[5001]: E0128 18:00:00.150542 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb576e5-a32f-4c92-94e3-5e835afc4ca5" containerName="extract-utilities" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.150666 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb576e5-a32f-4c92-94e3-5e835afc4ca5" containerName="extract-utilities" Jan 28 18:00:00 crc kubenswrapper[5001]: E0128 18:00:00.150761 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658" containerName="registry-server" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.151234 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658" containerName="registry-server" Jan 28 18:00:00 crc kubenswrapper[5001]: E0128 18:00:00.151349 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb576e5-a32f-4c92-94e3-5e835afc4ca5" containerName="extract-content" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.151436 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb576e5-a32f-4c92-94e3-5e835afc4ca5" containerName="extract-content" Jan 28 18:00:00 crc kubenswrapper[5001]: E0128 18:00:00.151527 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658" containerName="extract-utilities" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.151601 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658" containerName="extract-utilities" Jan 28 18:00:00 crc kubenswrapper[5001]: E0128 18:00:00.151678 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb576e5-a32f-4c92-94e3-5e835afc4ca5" containerName="registry-server" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.152423 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb576e5-a32f-4c92-94e3-5e835afc4ca5" containerName="registry-server" Jan 28 18:00:00 crc kubenswrapper[5001]: E0128 18:00:00.152506 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658" containerName="extract-content" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.152590 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658" containerName="extract-content" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.152854 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb576e5-a32f-4c92-94e3-5e835afc4ca5" containerName="registry-server" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.152945 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3bf19f6-ec78-4c6f-9cd7-2706dbb1b658" containerName="registry-server" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.155183 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-fnmlr" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.157856 5001 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.158781 5001 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.161445 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493720-fnmlr"] Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.341831 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf11d03c-b0c1-4de2-a8d7-24256617a736-config-volume\") pod \"collect-profiles-29493720-fnmlr\" (UID: \"cf11d03c-b0c1-4de2-a8d7-24256617a736\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-fnmlr" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.341893 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf11d03c-b0c1-4de2-a8d7-24256617a736-secret-volume\") pod \"collect-profiles-29493720-fnmlr\" (UID: \"cf11d03c-b0c1-4de2-a8d7-24256617a736\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-fnmlr" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.342041 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v6tb\" (UniqueName: \"kubernetes.io/projected/cf11d03c-b0c1-4de2-a8d7-24256617a736-kube-api-access-6v6tb\") pod \"collect-profiles-29493720-fnmlr\" (UID: \"cf11d03c-b0c1-4de2-a8d7-24256617a736\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-fnmlr" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.443887 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v6tb\" (UniqueName: \"kubernetes.io/projected/cf11d03c-b0c1-4de2-a8d7-24256617a736-kube-api-access-6v6tb\") pod \"collect-profiles-29493720-fnmlr\" (UID: \"cf11d03c-b0c1-4de2-a8d7-24256617a736\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-fnmlr" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.444070 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf11d03c-b0c1-4de2-a8d7-24256617a736-config-volume\") pod \"collect-profiles-29493720-fnmlr\" (UID: \"cf11d03c-b0c1-4de2-a8d7-24256617a736\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-fnmlr" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.444100 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf11d03c-b0c1-4de2-a8d7-24256617a736-secret-volume\") pod \"collect-profiles-29493720-fnmlr\" (UID: \"cf11d03c-b0c1-4de2-a8d7-24256617a736\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-fnmlr" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.445031 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf11d03c-b0c1-4de2-a8d7-24256617a736-config-volume\") pod 
\"collect-profiles-29493720-fnmlr\" (UID: \"cf11d03c-b0c1-4de2-a8d7-24256617a736\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-fnmlr" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.457404 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf11d03c-b0c1-4de2-a8d7-24256617a736-secret-volume\") pod \"collect-profiles-29493720-fnmlr\" (UID: \"cf11d03c-b0c1-4de2-a8d7-24256617a736\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-fnmlr" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.459616 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v6tb\" (UniqueName: \"kubernetes.io/projected/cf11d03c-b0c1-4de2-a8d7-24256617a736-kube-api-access-6v6tb\") pod \"collect-profiles-29493720-fnmlr\" (UID: \"cf11d03c-b0c1-4de2-a8d7-24256617a736\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-fnmlr" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.476397 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-fnmlr" Jan 28 18:00:00 crc kubenswrapper[5001]: I0128 18:00:00.895376 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493720-fnmlr"] Jan 28 18:00:00 crc kubenswrapper[5001]: W0128 18:00:00.901483 5001 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf11d03c_b0c1_4de2_a8d7_24256617a736.slice/crio-45e875f95928a6bbbc8b577f47900b486cee510413864c4aa1ec0632236ab4da WatchSource:0}: Error finding container 45e875f95928a6bbbc8b577f47900b486cee510413864c4aa1ec0632236ab4da: Status 404 returned error can't find the container with id 45e875f95928a6bbbc8b577f47900b486cee510413864c4aa1ec0632236ab4da Jan 28 18:00:01 crc kubenswrapper[5001]: I0128 18:00:01.901420 5001 generic.go:334] "Generic (PLEG): container finished" podID="cf11d03c-b0c1-4de2-a8d7-24256617a736" containerID="f6f1955054784d88189575c9187b45601f7a92012bbf0732a6563eee70e494d0" exitCode=0 Jan 28 18:00:01 crc kubenswrapper[5001]: I0128 18:00:01.901470 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-fnmlr" event={"ID":"cf11d03c-b0c1-4de2-a8d7-24256617a736","Type":"ContainerDied","Data":"f6f1955054784d88189575c9187b45601f7a92012bbf0732a6563eee70e494d0"} Jan 28 18:00:01 crc kubenswrapper[5001]: I0128 18:00:01.901743 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-fnmlr" event={"ID":"cf11d03c-b0c1-4de2-a8d7-24256617a736","Type":"ContainerStarted","Data":"45e875f95928a6bbbc8b577f47900b486cee510413864c4aa1ec0632236ab4da"} Jan 28 18:00:03 crc kubenswrapper[5001]: I0128 18:00:03.203636 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-fnmlr" Jan 28 18:00:03 crc kubenswrapper[5001]: I0128 18:00:03.390927 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf11d03c-b0c1-4de2-a8d7-24256617a736-secret-volume\") pod \"cf11d03c-b0c1-4de2-a8d7-24256617a736\" (UID: \"cf11d03c-b0c1-4de2-a8d7-24256617a736\") " Jan 28 18:00:03 crc kubenswrapper[5001]: I0128 18:00:03.391155 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf11d03c-b0c1-4de2-a8d7-24256617a736-config-volume\") pod \"cf11d03c-b0c1-4de2-a8d7-24256617a736\" (UID: \"cf11d03c-b0c1-4de2-a8d7-24256617a736\") " Jan 28 18:00:03 crc kubenswrapper[5001]: I0128 18:00:03.391203 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v6tb\" (UniqueName: \"kubernetes.io/projected/cf11d03c-b0c1-4de2-a8d7-24256617a736-kube-api-access-6v6tb\") pod \"cf11d03c-b0c1-4de2-a8d7-24256617a736\" (UID: \"cf11d03c-b0c1-4de2-a8d7-24256617a736\") " Jan 28 18:00:03 crc kubenswrapper[5001]: I0128 18:00:03.392769 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf11d03c-b0c1-4de2-a8d7-24256617a736-config-volume" (OuterVolumeSpecName: "config-volume") pod "cf11d03c-b0c1-4de2-a8d7-24256617a736" (UID: "cf11d03c-b0c1-4de2-a8d7-24256617a736"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:00:03 crc kubenswrapper[5001]: I0128 18:00:03.397094 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf11d03c-b0c1-4de2-a8d7-24256617a736-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cf11d03c-b0c1-4de2-a8d7-24256617a736" (UID: "cf11d03c-b0c1-4de2-a8d7-24256617a736"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:00:03 crc kubenswrapper[5001]: I0128 18:00:03.397296 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf11d03c-b0c1-4de2-a8d7-24256617a736-kube-api-access-6v6tb" (OuterVolumeSpecName: "kube-api-access-6v6tb") pod "cf11d03c-b0c1-4de2-a8d7-24256617a736" (UID: "cf11d03c-b0c1-4de2-a8d7-24256617a736"). InnerVolumeSpecName "kube-api-access-6v6tb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:00:03 crc kubenswrapper[5001]: I0128 18:00:03.493458 5001 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf11d03c-b0c1-4de2-a8d7-24256617a736-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:00:03 crc kubenswrapper[5001]: I0128 18:00:03.493520 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v6tb\" (UniqueName: \"kubernetes.io/projected/cf11d03c-b0c1-4de2-a8d7-24256617a736-kube-api-access-6v6tb\") on node \"crc\" DevicePath \"\"" Jan 28 18:00:03 crc kubenswrapper[5001]: I0128 18:00:03.493531 5001 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf11d03c-b0c1-4de2-a8d7-24256617a736-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:00:03 crc kubenswrapper[5001]: I0128 18:00:03.683491 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pjnn8/must-gather-44722"] Jan 28 18:00:03 crc kubenswrapper[5001]: I0128 18:00:03.683724 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-pjnn8/must-gather-44722" podUID="4cec17ac-482a-4e10-9a55-6f61b3b3eddf" containerName="copy" containerID="cri-o://8e2d92b634a95a2ff55c95ae23ee7f5e1315173d6c329b5ae26a9c39b93bfdeb" gracePeriod=2 Jan 28 18:00:03 crc kubenswrapper[5001]: I0128 18:00:03.690688 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pjnn8/must-gather-44722"] Jan 28 18:00:03 crc kubenswrapper[5001]: I0128 18:00:03.923735 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-fnmlr" Jan 28 18:00:03 crc kubenswrapper[5001]: I0128 18:00:03.926248 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-fnmlr" event={"ID":"cf11d03c-b0c1-4de2-a8d7-24256617a736","Type":"ContainerDied","Data":"45e875f95928a6bbbc8b577f47900b486cee510413864c4aa1ec0632236ab4da"} Jan 28 18:00:03 crc kubenswrapper[5001]: I0128 18:00:03.926301 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45e875f95928a6bbbc8b577f47900b486cee510413864c4aa1ec0632236ab4da" Jan 28 18:00:03 crc kubenswrapper[5001]: I0128 18:00:03.928656 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pjnn8_must-gather-44722_4cec17ac-482a-4e10-9a55-6f61b3b3eddf/copy/0.log" Jan 28 18:00:03 crc kubenswrapper[5001]: I0128 18:00:03.929254 5001 generic.go:334] "Generic (PLEG): container finished" podID="4cec17ac-482a-4e10-9a55-6f61b3b3eddf" containerID="8e2d92b634a95a2ff55c95ae23ee7f5e1315173d6c329b5ae26a9c39b93bfdeb" exitCode=143 Jan 28 18:00:04 crc kubenswrapper[5001]: E0128 18:00:04.066556 5001 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf11d03c_b0c1_4de2_a8d7_24256617a736.slice\": RecentStats: unable to find data in memory cache]" Jan 28 18:00:04 crc kubenswrapper[5001]: I0128 18:00:04.069017 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pjnn8_must-gather-44722_4cec17ac-482a-4e10-9a55-6f61b3b3eddf/copy/0.log" Jan 28 18:00:04 crc kubenswrapper[5001]: I0128 18:00:04.069463 5001 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pjnn8/must-gather-44722" Jan 28 18:00:04 crc kubenswrapper[5001]: I0128 18:00:04.204401 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4cec17ac-482a-4e10-9a55-6f61b3b3eddf-must-gather-output\") pod \"4cec17ac-482a-4e10-9a55-6f61b3b3eddf\" (UID: \"4cec17ac-482a-4e10-9a55-6f61b3b3eddf\") " Jan 28 18:00:04 crc kubenswrapper[5001]: I0128 18:00:04.204659 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6dkw\" (UniqueName: \"kubernetes.io/projected/4cec17ac-482a-4e10-9a55-6f61b3b3eddf-kube-api-access-z6dkw\") pod \"4cec17ac-482a-4e10-9a55-6f61b3b3eddf\" (UID: \"4cec17ac-482a-4e10-9a55-6f61b3b3eddf\") " Jan 28 18:00:04 crc kubenswrapper[5001]: I0128 18:00:04.209659 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cec17ac-482a-4e10-9a55-6f61b3b3eddf-kube-api-access-z6dkw" (OuterVolumeSpecName: "kube-api-access-z6dkw") pod "4cec17ac-482a-4e10-9a55-6f61b3b3eddf" (UID: "4cec17ac-482a-4e10-9a55-6f61b3b3eddf"). InnerVolumeSpecName "kube-api-access-z6dkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:00:04 crc kubenswrapper[5001]: I0128 18:00:04.263365 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb"] Jan 28 18:00:04 crc kubenswrapper[5001]: I0128 18:00:04.269609 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493675-8hkbb"] Jan 28 18:00:04 crc kubenswrapper[5001]: I0128 18:00:04.306365 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6dkw\" (UniqueName: \"kubernetes.io/projected/4cec17ac-482a-4e10-9a55-6f61b3b3eddf-kube-api-access-z6dkw\") on node \"crc\" DevicePath \"\"" Jan 28 18:00:04 crc kubenswrapper[5001]: I0128 18:00:04.317902 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cec17ac-482a-4e10-9a55-6f61b3b3eddf-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "4cec17ac-482a-4e10-9a55-6f61b3b3eddf" (UID: "4cec17ac-482a-4e10-9a55-6f61b3b3eddf"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:00:04 crc kubenswrapper[5001]: I0128 18:00:04.408501 5001 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4cec17ac-482a-4e10-9a55-6f61b3b3eddf-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 28 18:00:04 crc kubenswrapper[5001]: I0128 18:00:04.603774 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cec17ac-482a-4e10-9a55-6f61b3b3eddf" path="/var/lib/kubelet/pods/4cec17ac-482a-4e10-9a55-6f61b3b3eddf/volumes" Jan 28 18:00:04 crc kubenswrapper[5001]: I0128 18:00:04.610585 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a92c043-2e58-4a72-9ecb-024736e0ff21" path="/var/lib/kubelet/pods/6a92c043-2e58-4a72-9ecb-024736e0ff21/volumes" Jan 28 18:00:04 crc kubenswrapper[5001]: I0128 18:00:04.940713 5001 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pjnn8_must-gather-44722_4cec17ac-482a-4e10-9a55-6f61b3b3eddf/copy/0.log" Jan 28 18:00:04 crc kubenswrapper[5001]: I0128 18:00:04.941948 5001 scope.go:117] "RemoveContainer" containerID="8e2d92b634a95a2ff55c95ae23ee7f5e1315173d6c329b5ae26a9c39b93bfdeb" Jan 28 18:00:04 crc kubenswrapper[5001]: I0128 18:00:04.942033 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pjnn8/must-gather-44722" Jan 28 18:00:04 crc kubenswrapper[5001]: I0128 18:00:04.966664 5001 scope.go:117] "RemoveContainer" containerID="0c5c93da5361b264e7e73d15ef19f3f0cfcf8fd44dcc38da47a1dea8dc751bcc" Jan 28 18:00:14 crc kubenswrapper[5001]: I0128 18:00:14.597702 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 18:00:14 crc kubenswrapper[5001]: E0128 18:00:14.602608 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 18:00:27 crc kubenswrapper[5001]: I0128 18:00:27.593889 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 18:00:27 crc kubenswrapper[5001]: E0128 18:00:27.594809 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 18:00:40 crc kubenswrapper[5001]: I0128 18:00:40.594133 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 18:00:40 crc kubenswrapper[5001]: E0128 18:00:40.594865 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 18:00:51 crc kubenswrapper[5001]: I0128 18:00:51.594356 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3" Jan 28 18:00:51 crc kubenswrapper[5001]: E0128 18:00:51.595494 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251" Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.159992 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-cron-29493721-j5rbz"] Jan 28 18:01:00 crc kubenswrapper[5001]: E0128 18:01:00.160845 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf11d03c-b0c1-4de2-a8d7-24256617a736" containerName="collect-profiles" Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.160861 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf11d03c-b0c1-4de2-a8d7-24256617a736" containerName="collect-profiles" Jan 28 18:01:00 crc kubenswrapper[5001]: E0128 18:01:00.160888 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cec17ac-482a-4e10-9a55-6f61b3b3eddf" containerName="copy" Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.160898 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cec17ac-482a-4e10-9a55-6f61b3b3eddf" containerName="copy" Jan 28 18:01:00 crc kubenswrapper[5001]: E0128 18:01:00.160917 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cec17ac-482a-4e10-9a55-6f61b3b3eddf" containerName="gather" Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.160926 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cec17ac-482a-4e10-9a55-6f61b3b3eddf" containerName="gather" Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.161099 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf11d03c-b0c1-4de2-a8d7-24256617a736" containerName="collect-profiles" Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.161113 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cec17ac-482a-4e10-9a55-6f61b3b3eddf" containerName="gather" Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.161120 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cec17ac-482a-4e10-9a55-6f61b3b3eddf" containerName="copy" Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.161693 5001 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-cron-29493721-j5rbz" Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.171366 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-cron-29493721-j5rbz"] Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.243086 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm5vm\" (UniqueName: \"kubernetes.io/projected/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-kube-api-access-pm5vm\") pod \"keystone-cron-29493721-j5rbz\" (UID: \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\") " pod="nova-kuttl-default/keystone-cron-29493721-j5rbz" Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.243179 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-fernet-keys\") pod \"keystone-cron-29493721-j5rbz\" (UID: \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\") " pod="nova-kuttl-default/keystone-cron-29493721-j5rbz" Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.243387 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-config-data\") pod \"keystone-cron-29493721-j5rbz\" (UID: \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\") " pod="nova-kuttl-default/keystone-cron-29493721-j5rbz" Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.243589 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-combined-ca-bundle\") pod \"keystone-cron-29493721-j5rbz\" (UID: \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\") " pod="nova-kuttl-default/keystone-cron-29493721-j5rbz" Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.344767 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-combined-ca-bundle\") pod \"keystone-cron-29493721-j5rbz\" (UID: \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\") " pod="nova-kuttl-default/keystone-cron-29493721-j5rbz" Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.344878 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm5vm\" (UniqueName: \"kubernetes.io/projected/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-kube-api-access-pm5vm\") pod \"keystone-cron-29493721-j5rbz\" (UID: \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\") " pod="nova-kuttl-default/keystone-cron-29493721-j5rbz" Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.344951 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-fernet-keys\") pod \"keystone-cron-29493721-j5rbz\" (UID: \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\") " pod="nova-kuttl-default/keystone-cron-29493721-j5rbz" Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.345054 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-config-data\") pod \"keystone-cron-29493721-j5rbz\" (UID: \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\") " pod="nova-kuttl-default/keystone-cron-29493721-j5rbz" Jan 28 18:01:00 crc kubenswrapper[5001]: 
I0128 18:01:00.352410 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-config-data\") pod \"keystone-cron-29493721-j5rbz\" (UID: \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\") " pod="nova-kuttl-default/keystone-cron-29493721-j5rbz"
Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.356048 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-combined-ca-bundle\") pod \"keystone-cron-29493721-j5rbz\" (UID: \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\") " pod="nova-kuttl-default/keystone-cron-29493721-j5rbz"
Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.363708 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-fernet-keys\") pod \"keystone-cron-29493721-j5rbz\" (UID: \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\") " pod="nova-kuttl-default/keystone-cron-29493721-j5rbz"
Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.367241 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm5vm\" (UniqueName: \"kubernetes.io/projected/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-kube-api-access-pm5vm\") pod \"keystone-cron-29493721-j5rbz\" (UID: \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\") " pod="nova-kuttl-default/keystone-cron-29493721-j5rbz"
Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.517403 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-cron-29493721-j5rbz"
Jan 28 18:01:00 crc kubenswrapper[5001]: I0128 18:01:00.938559 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-cron-29493721-j5rbz"]
Jan 28 18:01:01 crc kubenswrapper[5001]: I0128 18:01:01.383771 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-cron-29493721-j5rbz" event={"ID":"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd","Type":"ContainerStarted","Data":"a4f2a23148575a11a1cf113ec23918de482e64750ff300eb72a5b4bacb22f9e4"}
Jan 28 18:01:01 crc kubenswrapper[5001]: I0128 18:01:01.384071 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-cron-29493721-j5rbz" event={"ID":"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd","Type":"ContainerStarted","Data":"c8afaaf34fccaa40edce5f03dd1f60b27a6da462d39274068ae883a7c88b7940"}
Jan 28 18:01:01 crc kubenswrapper[5001]: I0128 18:01:01.401392 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-cron-29493721-j5rbz" podStartSLOduration=1.401360111 podStartE2EDuration="1.401360111s" podCreationTimestamp="2026-01-28 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:01:01.398321374 +0000 UTC m=+2707.566109614" watchObservedRunningTime="2026-01-28 18:01:01.401360111 +0000 UTC m=+2707.569148341"
Jan 28 18:01:02 crc kubenswrapper[5001]: I0128 18:01:02.063243 5001 scope.go:117] "RemoveContainer" containerID="405de87de5ed840d57728124435148c7f4615ef4dca8475e725b5f41c7b6bb0a"
Jan 28 18:01:03 crc kubenswrapper[5001]: I0128 18:01:03.400376 5001 generic.go:334] "Generic (PLEG): container finished" podID="d87be8fe-65fc-43e1-8c3c-4c76625bb5bd" containerID="a4f2a23148575a11a1cf113ec23918de482e64750ff300eb72a5b4bacb22f9e4" exitCode=0
Jan 28 18:01:03 crc kubenswrapper[5001]: I0128 18:01:03.400432 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-cron-29493721-j5rbz" event={"ID":"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd","Type":"ContainerDied","Data":"a4f2a23148575a11a1cf113ec23918de482e64750ff300eb72a5b4bacb22f9e4"}
Jan 28 18:01:03 crc kubenswrapper[5001]: I0128 18:01:03.594360 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3"
Jan 28 18:01:03 crc kubenswrapper[5001]: E0128 18:01:03.594778 5001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mqgwk_openshift-machine-config-operator(8de2d052-6f7c-4345-91fa-ba2fc7532251)\"" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" podUID="8de2d052-6f7c-4345-91fa-ba2fc7532251"
Jan 28 18:01:04 crc kubenswrapper[5001]: I0128 18:01:04.693925 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-cron-29493721-j5rbz"
Jan 28 18:01:04 crc kubenswrapper[5001]: I0128 18:01:04.811730 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-combined-ca-bundle\") pod \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\" (UID: \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\") "
Jan 28 18:01:04 crc kubenswrapper[5001]: I0128 18:01:04.811783 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-config-data\") pod \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\" (UID: \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\") "
Jan 28 18:01:04 crc kubenswrapper[5001]: I0128 18:01:04.811820 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-fernet-keys\") pod \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\" (UID: \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\") "
Jan 28 18:01:04 crc kubenswrapper[5001]: I0128 18:01:04.811918 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm5vm\" (UniqueName: \"kubernetes.io/projected/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-kube-api-access-pm5vm\") pod \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\" (UID: \"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd\") "
Jan 28 18:01:04 crc kubenswrapper[5001]: I0128 18:01:04.822187 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d87be8fe-65fc-43e1-8c3c-4c76625bb5bd" (UID: "d87be8fe-65fc-43e1-8c3c-4c76625bb5bd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:01:04 crc kubenswrapper[5001]: I0128 18:01:04.823118 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-kube-api-access-pm5vm" (OuterVolumeSpecName: "kube-api-access-pm5vm") pod "d87be8fe-65fc-43e1-8c3c-4c76625bb5bd" (UID: "d87be8fe-65fc-43e1-8c3c-4c76625bb5bd"). InnerVolumeSpecName "kube-api-access-pm5vm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:01:04 crc kubenswrapper[5001]: I0128 18:01:04.833267 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d87be8fe-65fc-43e1-8c3c-4c76625bb5bd" (UID: "d87be8fe-65fc-43e1-8c3c-4c76625bb5bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:01:04 crc kubenswrapper[5001]: I0128 18:01:04.864323 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-config-data" (OuterVolumeSpecName: "config-data") pod "d87be8fe-65fc-43e1-8c3c-4c76625bb5bd" (UID: "d87be8fe-65fc-43e1-8c3c-4c76625bb5bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:01:04 crc kubenswrapper[5001]: I0128 18:01:04.914187 5001 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:01:04 crc kubenswrapper[5001]: I0128 18:01:04.914217 5001 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 18:01:04 crc kubenswrapper[5001]: I0128 18:01:04.914227 5001 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 28 18:01:04 crc kubenswrapper[5001]: I0128 18:01:04.914236 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm5vm\" (UniqueName: \"kubernetes.io/projected/d87be8fe-65fc-43e1-8c3c-4c76625bb5bd-kube-api-access-pm5vm\") on node \"crc\" DevicePath \"\""
Jan 28 18:01:05 crc kubenswrapper[5001]: I0128 18:01:05.423557 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-cron-29493721-j5rbz" event={"ID":"d87be8fe-65fc-43e1-8c3c-4c76625bb5bd","Type":"ContainerDied","Data":"c8afaaf34fccaa40edce5f03dd1f60b27a6da462d39274068ae883a7c88b7940"}
Jan 28 18:01:05 crc kubenswrapper[5001]: I0128 18:01:05.423807 5001 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8afaaf34fccaa40edce5f03dd1f60b27a6da462d39274068ae883a7c88b7940"
Jan 28 18:01:05 crc kubenswrapper[5001]: I0128 18:01:05.423616 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-cron-29493721-j5rbz"
Jan 28 18:01:15 crc kubenswrapper[5001]: I0128 18:01:15.595633 5001 scope.go:117] "RemoveContainer" containerID="e29ddb8f89b83e6ede1d76372292b845b1b9d692905407907af3cda336cc2cb3"
Jan 28 18:01:16 crc kubenswrapper[5001]: I0128 18:01:16.507271 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mqgwk" event={"ID":"8de2d052-6f7c-4345-91fa-ba2fc7532251","Type":"ContainerStarted","Data":"794e6f8e0f0ee1b4cb89ac74eaa815d58795c99806a0908d6822387a686ef2dd"}
Jan 28 18:02:14 crc kubenswrapper[5001]: I0128 18:02:14.627305 5001 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6gvq2"]
Jan 28 18:02:14 crc kubenswrapper[5001]: E0128 18:02:14.629114 5001 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87be8fe-65fc-43e1-8c3c-4c76625bb5bd" containerName="keystone-cron"
Jan 28 18:02:14 crc kubenswrapper[5001]: I0128 18:02:14.629204 5001 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87be8fe-65fc-43e1-8c3c-4c76625bb5bd" containerName="keystone-cron"
Jan 28 18:02:14 crc kubenswrapper[5001]: I0128 18:02:14.629419 5001 memory_manager.go:354] "RemoveStaleState removing state" podUID="d87be8fe-65fc-43e1-8c3c-4c76625bb5bd" containerName="keystone-cron"
Jan 28 18:02:14 crc kubenswrapper[5001]: I0128 18:02:14.641145 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gvq2"
Jan 28 18:02:14 crc kubenswrapper[5001]: I0128 18:02:14.651815 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gvq2"]
Jan 28 18:02:14 crc kubenswrapper[5001]: I0128 18:02:14.699598 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20e8509b-58d7-47b4-b3ec-22f8dc293d95-utilities\") pod \"redhat-marketplace-6gvq2\" (UID: \"20e8509b-58d7-47b4-b3ec-22f8dc293d95\") " pod="openshift-marketplace/redhat-marketplace-6gvq2"
Jan 28 18:02:14 crc kubenswrapper[5001]: I0128 18:02:14.700227 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20e8509b-58d7-47b4-b3ec-22f8dc293d95-catalog-content\") pod \"redhat-marketplace-6gvq2\" (UID: \"20e8509b-58d7-47b4-b3ec-22f8dc293d95\") " pod="openshift-marketplace/redhat-marketplace-6gvq2"
Jan 28 18:02:14 crc kubenswrapper[5001]: I0128 18:02:14.700458 5001 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx725\" (UniqueName: \"kubernetes.io/projected/20e8509b-58d7-47b4-b3ec-22f8dc293d95-kube-api-access-qx725\") pod \"redhat-marketplace-6gvq2\" (UID: \"20e8509b-58d7-47b4-b3ec-22f8dc293d95\") " pod="openshift-marketplace/redhat-marketplace-6gvq2"
Jan 28 18:02:14 crc kubenswrapper[5001]: I0128 18:02:14.802453 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20e8509b-58d7-47b4-b3ec-22f8dc293d95-catalog-content\") pod \"redhat-marketplace-6gvq2\" (UID: \"20e8509b-58d7-47b4-b3ec-22f8dc293d95\") " pod="openshift-marketplace/redhat-marketplace-6gvq2"
Jan 28 18:02:14 crc kubenswrapper[5001]: I0128 18:02:14.802747 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx725\" (UniqueName: \"kubernetes.io/projected/20e8509b-58d7-47b4-b3ec-22f8dc293d95-kube-api-access-qx725\") pod \"redhat-marketplace-6gvq2\" (UID: \"20e8509b-58d7-47b4-b3ec-22f8dc293d95\") " pod="openshift-marketplace/redhat-marketplace-6gvq2"
Jan 28 18:02:14 crc kubenswrapper[5001]: I0128 18:02:14.802918 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20e8509b-58d7-47b4-b3ec-22f8dc293d95-catalog-content\") pod \"redhat-marketplace-6gvq2\" (UID: \"20e8509b-58d7-47b4-b3ec-22f8dc293d95\") " pod="openshift-marketplace/redhat-marketplace-6gvq2"
Jan 28 18:02:14 crc kubenswrapper[5001]: I0128 18:02:14.802921 5001 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20e8509b-58d7-47b4-b3ec-22f8dc293d95-utilities\") pod \"redhat-marketplace-6gvq2\" (UID: \"20e8509b-58d7-47b4-b3ec-22f8dc293d95\") " pod="openshift-marketplace/redhat-marketplace-6gvq2"
Jan 28 18:02:14 crc kubenswrapper[5001]: I0128 18:02:14.803421 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20e8509b-58d7-47b4-b3ec-22f8dc293d95-utilities\") pod \"redhat-marketplace-6gvq2\" (UID: \"20e8509b-58d7-47b4-b3ec-22f8dc293d95\") " pod="openshift-marketplace/redhat-marketplace-6gvq2"
Jan 28 18:02:14 crc kubenswrapper[5001]: I0128 18:02:14.828382 5001 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx725\" (UniqueName: \"kubernetes.io/projected/20e8509b-58d7-47b4-b3ec-22f8dc293d95-kube-api-access-qx725\") pod \"redhat-marketplace-6gvq2\" (UID: \"20e8509b-58d7-47b4-b3ec-22f8dc293d95\") " pod="openshift-marketplace/redhat-marketplace-6gvq2"
Jan 28 18:02:14 crc kubenswrapper[5001]: I0128 18:02:14.966105 5001 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gvq2"
Jan 28 18:02:15 crc kubenswrapper[5001]: I0128 18:02:15.409750 5001 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gvq2"]
Jan 28 18:02:15 crc kubenswrapper[5001]: I0128 18:02:15.964169 5001 generic.go:334] "Generic (PLEG): container finished" podID="20e8509b-58d7-47b4-b3ec-22f8dc293d95" containerID="6a471f8cfa57da4064a265470f4619738ba17e0fe3d3d1a91d83527de7ec23a2" exitCode=0
Jan 28 18:02:15 crc kubenswrapper[5001]: I0128 18:02:15.964469 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gvq2" event={"ID":"20e8509b-58d7-47b4-b3ec-22f8dc293d95","Type":"ContainerDied","Data":"6a471f8cfa57da4064a265470f4619738ba17e0fe3d3d1a91d83527de7ec23a2"}
Jan 28 18:02:15 crc kubenswrapper[5001]: I0128 18:02:15.964493 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gvq2" event={"ID":"20e8509b-58d7-47b4-b3ec-22f8dc293d95","Type":"ContainerStarted","Data":"b97d8b4212b73176cece19734d2848be3bc3159e1a1df6d43998c9d0848d59a2"}
Jan 28 18:02:16 crc kubenswrapper[5001]: I0128 18:02:16.977622 5001 generic.go:334] "Generic (PLEG): container finished" podID="20e8509b-58d7-47b4-b3ec-22f8dc293d95" containerID="e12022b23ff0bfda52d2308e9be7fef0f44fdb5cee5682dc5fd26eaf5a4fe8f5" exitCode=0
Jan 28 18:02:16 crc kubenswrapper[5001]: I0128 18:02:16.977721 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gvq2" event={"ID":"20e8509b-58d7-47b4-b3ec-22f8dc293d95","Type":"ContainerDied","Data":"e12022b23ff0bfda52d2308e9be7fef0f44fdb5cee5682dc5fd26eaf5a4fe8f5"}
Jan 28 18:02:17 crc kubenswrapper[5001]: I0128 18:02:17.989812 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gvq2" event={"ID":"20e8509b-58d7-47b4-b3ec-22f8dc293d95","Type":"ContainerStarted","Data":"c86de0329bd2e7c7655ecb903892e092c3ba16b66b66eb8f5e870208aac7a02a"}
Jan 28 18:02:18 crc kubenswrapper[5001]: I0128 18:02:18.011473 5001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6gvq2" podStartSLOduration=2.5645774709999998 podStartE2EDuration="4.01144885s" podCreationTimestamp="2026-01-28 18:02:14 +0000 UTC" firstStartedPulling="2026-01-28 18:02:15.965798158 +0000 UTC m=+2782.133586388" lastFinishedPulling="2026-01-28 18:02:17.412669537 +0000 UTC m=+2783.580457767" observedRunningTime="2026-01-28 18:02:18.009824854 +0000 UTC m=+2784.177613084" watchObservedRunningTime="2026-01-28 18:02:18.01144885 +0000 UTC m=+2784.179237100"
Jan 28 18:02:24 crc kubenswrapper[5001]: I0128 18:02:24.966493 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6gvq2"
Jan 28 18:02:24 crc kubenswrapper[5001]: I0128 18:02:24.966874 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6gvq2"
Jan 28 18:02:25 crc kubenswrapper[5001]: I0128 18:02:25.019771 5001 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6gvq2"
Jan 28 18:02:25 crc kubenswrapper[5001]: I0128 18:02:25.107961 5001 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6gvq2"
Jan 28 18:02:25 crc kubenswrapper[5001]: I0128 18:02:25.264889 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gvq2"]
Jan 28 18:02:27 crc kubenswrapper[5001]: I0128 18:02:27.078816 5001 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6gvq2" podUID="20e8509b-58d7-47b4-b3ec-22f8dc293d95" containerName="registry-server" containerID="cri-o://c86de0329bd2e7c7655ecb903892e092c3ba16b66b66eb8f5e870208aac7a02a" gracePeriod=2
Jan 28 18:02:27 crc kubenswrapper[5001]: I0128 18:02:27.696290 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gvq2"
Jan 28 18:02:27 crc kubenswrapper[5001]: I0128 18:02:27.831071 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx725\" (UniqueName: \"kubernetes.io/projected/20e8509b-58d7-47b4-b3ec-22f8dc293d95-kube-api-access-qx725\") pod \"20e8509b-58d7-47b4-b3ec-22f8dc293d95\" (UID: \"20e8509b-58d7-47b4-b3ec-22f8dc293d95\") "
Jan 28 18:02:27 crc kubenswrapper[5001]: I0128 18:02:27.831212 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20e8509b-58d7-47b4-b3ec-22f8dc293d95-catalog-content\") pod \"20e8509b-58d7-47b4-b3ec-22f8dc293d95\" (UID: \"20e8509b-58d7-47b4-b3ec-22f8dc293d95\") "
Jan 28 18:02:27 crc kubenswrapper[5001]: I0128 18:02:27.831348 5001 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20e8509b-58d7-47b4-b3ec-22f8dc293d95-utilities\") pod \"20e8509b-58d7-47b4-b3ec-22f8dc293d95\" (UID: \"20e8509b-58d7-47b4-b3ec-22f8dc293d95\") "
Jan 28 18:02:27 crc kubenswrapper[5001]: I0128 18:02:27.832266 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20e8509b-58d7-47b4-b3ec-22f8dc293d95-utilities" (OuterVolumeSpecName: "utilities") pod "20e8509b-58d7-47b4-b3ec-22f8dc293d95" (UID: "20e8509b-58d7-47b4-b3ec-22f8dc293d95"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:02:27 crc kubenswrapper[5001]: I0128 18:02:27.832541 5001 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20e8509b-58d7-47b4-b3ec-22f8dc293d95-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 18:02:27 crc kubenswrapper[5001]: I0128 18:02:27.840858 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20e8509b-58d7-47b4-b3ec-22f8dc293d95-kube-api-access-qx725" (OuterVolumeSpecName: "kube-api-access-qx725") pod "20e8509b-58d7-47b4-b3ec-22f8dc293d95" (UID: "20e8509b-58d7-47b4-b3ec-22f8dc293d95"). InnerVolumeSpecName "kube-api-access-qx725". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:02:27 crc kubenswrapper[5001]: I0128 18:02:27.854752 5001 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20e8509b-58d7-47b4-b3ec-22f8dc293d95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20e8509b-58d7-47b4-b3ec-22f8dc293d95" (UID: "20e8509b-58d7-47b4-b3ec-22f8dc293d95"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:02:27 crc kubenswrapper[5001]: I0128 18:02:27.934657 5001 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx725\" (UniqueName: \"kubernetes.io/projected/20e8509b-58d7-47b4-b3ec-22f8dc293d95-kube-api-access-qx725\") on node \"crc\" DevicePath \"\""
Jan 28 18:02:27 crc kubenswrapper[5001]: I0128 18:02:27.934699 5001 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20e8509b-58d7-47b4-b3ec-22f8dc293d95-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 18:02:28 crc kubenswrapper[5001]: I0128 18:02:28.090395 5001 generic.go:334] "Generic (PLEG): container finished" podID="20e8509b-58d7-47b4-b3ec-22f8dc293d95" containerID="c86de0329bd2e7c7655ecb903892e092c3ba16b66b66eb8f5e870208aac7a02a" exitCode=0
Jan 28 18:02:28 crc kubenswrapper[5001]: I0128 18:02:28.090451 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gvq2" event={"ID":"20e8509b-58d7-47b4-b3ec-22f8dc293d95","Type":"ContainerDied","Data":"c86de0329bd2e7c7655ecb903892e092c3ba16b66b66eb8f5e870208aac7a02a"}
Jan 28 18:02:28 crc kubenswrapper[5001]: I0128 18:02:28.090493 5001 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gvq2" event={"ID":"20e8509b-58d7-47b4-b3ec-22f8dc293d95","Type":"ContainerDied","Data":"b97d8b4212b73176cece19734d2848be3bc3159e1a1df6d43998c9d0848d59a2"}
Jan 28 18:02:28 crc kubenswrapper[5001]: I0128 18:02:28.090518 5001 scope.go:117] "RemoveContainer" containerID="c86de0329bd2e7c7655ecb903892e092c3ba16b66b66eb8f5e870208aac7a02a"
Jan 28 18:02:28 crc kubenswrapper[5001]: I0128 18:02:28.090513 5001 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gvq2"
Jan 28 18:02:28 crc kubenswrapper[5001]: I0128 18:02:28.138218 5001 scope.go:117] "RemoveContainer" containerID="e12022b23ff0bfda52d2308e9be7fef0f44fdb5cee5682dc5fd26eaf5a4fe8f5"
Jan 28 18:02:28 crc kubenswrapper[5001]: I0128 18:02:28.139624 5001 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gvq2"]
Jan 28 18:02:28 crc kubenswrapper[5001]: I0128 18:02:28.145860 5001 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gvq2"]
Jan 28 18:02:28 crc kubenswrapper[5001]: I0128 18:02:28.157851 5001 scope.go:117] "RemoveContainer" containerID="6a471f8cfa57da4064a265470f4619738ba17e0fe3d3d1a91d83527de7ec23a2"
Jan 28 18:02:28 crc kubenswrapper[5001]: I0128 18:02:28.185807 5001 scope.go:117] "RemoveContainer" containerID="c86de0329bd2e7c7655ecb903892e092c3ba16b66b66eb8f5e870208aac7a02a"
Jan 28 18:02:28 crc kubenswrapper[5001]: E0128 18:02:28.186316 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c86de0329bd2e7c7655ecb903892e092c3ba16b66b66eb8f5e870208aac7a02a\": container with ID starting with c86de0329bd2e7c7655ecb903892e092c3ba16b66b66eb8f5e870208aac7a02a not found: ID does not exist" containerID="c86de0329bd2e7c7655ecb903892e092c3ba16b66b66eb8f5e870208aac7a02a"
Jan 28 18:02:28 crc kubenswrapper[5001]: I0128 18:02:28.186358 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c86de0329bd2e7c7655ecb903892e092c3ba16b66b66eb8f5e870208aac7a02a"} err="failed to get container status \"c86de0329bd2e7c7655ecb903892e092c3ba16b66b66eb8f5e870208aac7a02a\": rpc error: code = NotFound desc = could not find container \"c86de0329bd2e7c7655ecb903892e092c3ba16b66b66eb8f5e870208aac7a02a\": container with ID starting with c86de0329bd2e7c7655ecb903892e092c3ba16b66b66eb8f5e870208aac7a02a not found: ID does not exist"
Jan 28 18:02:28 crc kubenswrapper[5001]: I0128 18:02:28.186380 5001 scope.go:117] "RemoveContainer" containerID="e12022b23ff0bfda52d2308e9be7fef0f44fdb5cee5682dc5fd26eaf5a4fe8f5"
Jan 28 18:02:28 crc kubenswrapper[5001]: E0128 18:02:28.186663 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e12022b23ff0bfda52d2308e9be7fef0f44fdb5cee5682dc5fd26eaf5a4fe8f5\": container with ID starting with e12022b23ff0bfda52d2308e9be7fef0f44fdb5cee5682dc5fd26eaf5a4fe8f5 not found: ID does not exist" containerID="e12022b23ff0bfda52d2308e9be7fef0f44fdb5cee5682dc5fd26eaf5a4fe8f5"
Jan 28 18:02:28 crc kubenswrapper[5001]: I0128 18:02:28.186691 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e12022b23ff0bfda52d2308e9be7fef0f44fdb5cee5682dc5fd26eaf5a4fe8f5"} err="failed to get container status \"e12022b23ff0bfda52d2308e9be7fef0f44fdb5cee5682dc5fd26eaf5a4fe8f5\": rpc error: code = NotFound desc = could not find container \"e12022b23ff0bfda52d2308e9be7fef0f44fdb5cee5682dc5fd26eaf5a4fe8f5\": container with ID starting with e12022b23ff0bfda52d2308e9be7fef0f44fdb5cee5682dc5fd26eaf5a4fe8f5 not found: ID does not exist"
Jan 28 18:02:28 crc kubenswrapper[5001]: I0128 18:02:28.186705 5001 scope.go:117] "RemoveContainer" containerID="6a471f8cfa57da4064a265470f4619738ba17e0fe3d3d1a91d83527de7ec23a2"
Jan 28 18:02:28 crc kubenswrapper[5001]: E0128 18:02:28.186952 5001 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a471f8cfa57da4064a265470f4619738ba17e0fe3d3d1a91d83527de7ec23a2\": container with ID starting with 6a471f8cfa57da4064a265470f4619738ba17e0fe3d3d1a91d83527de7ec23a2 not found: ID does not exist" containerID="6a471f8cfa57da4064a265470f4619738ba17e0fe3d3d1a91d83527de7ec23a2"
Jan 28 18:02:28 crc kubenswrapper[5001]: I0128 18:02:28.186995 5001 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a471f8cfa57da4064a265470f4619738ba17e0fe3d3d1a91d83527de7ec23a2"} err="failed to get container status \"6a471f8cfa57da4064a265470f4619738ba17e0fe3d3d1a91d83527de7ec23a2\": rpc error: code = NotFound desc = could not find container \"6a471f8cfa57da4064a265470f4619738ba17e0fe3d3d1a91d83527de7ec23a2\": container with ID starting with 6a471f8cfa57da4064a265470f4619738ba17e0fe3d3d1a91d83527de7ec23a2 not found: ID does not exist"
Jan 28 18:02:28 crc kubenswrapper[5001]: I0128 18:02:28.608371 5001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20e8509b-58d7-47b4-b3ec-22f8dc293d95" path="/var/lib/kubelet/pods/20e8509b-58d7-47b4-b3ec-22f8dc293d95/volumes"